NAME: Gustavo Leite de Mendonça Chaves
EMAIL: gustavo@gnustavo.com
LOCATION: Campinas, SP, Brazil
UPDATED: 2018-01-02
Hi. My name is Gustavo Leite de Mendonça Chaves. I was born in Poços de Caldas, MG, Brazil, in 1966. I came to Campinas, SP, in 1984 to study Computer Science at UNICAMP, where I got a bachelor's degree in 1989. I attended several post-graduate courses but stopped short of getting a master's degree. I'm married. We have a son and a daughter.
I started working at CPqD in 1989 as a systems programmer in the group responsible for the implementation of the CHILL Programming Tools that were used in the development of the Trópico RA switching system. There I worked on the implementation of compilers, debuggers, and run-time libraries, which was much fun. In 1996 it became clear to me that I was deeply involved in the "programming language" area but lacking some "systems" knowledge. Hence, I changed gears and started working as a systems and network administrator. In 2008 I got back to the programming area from a different angle. Since then I have led the team responsible for maintaining the software development tools at CPqD, which has been great.
Along the way I got an MBA in Enterprise Management from ESAMC in 2006. It didn't hurt (much).
This is not a complete history of my professional career. These are some of the things I feel proud of having done.
The original JIRA SOAP API was deprecated and replaced by a new REST API. So, I implemented this Perl module to replace the older JIRA::Client. It's available on CPAN and as a Debian package.
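A minimal sketch of the kind of interaction the module is meant for, assuming it is the JIRA::REST module on CPAN; the server URL, credentials, and issue key below are hypothetical:

    use JIRA::REST;

    # Hypothetical server and credentials.
    my $jira = JIRA::REST->new({
        url      => 'https://jira.example.com',
        username => 'bot',
        password => 'secret',
    });

    # Fetch an issue and add a comment to it through the REST API.
    my $issue = $jira->GET('/issue/TST-1');
    $jira->POST('/issue/TST-1/comment', undef,
                { body => "Seen summary: $issue->{fields}{summary}" });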
In 2012 we began using Git at CPqD, with a Gerrit server. I implemented this Perl module, initially inspired by my previous SVN::Hooks, in order to make it possible to enforce some commit policies in our repositories. It's available on CPAN.
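Assuming the module in question is Git::Hooks, its basic setup is a tiny generic driver script to which the repository's hook files are linked; the policies to enforce are then chosen per repository with configuration options (for example, a githooks.plugin setting naming plugins such as CheckLog):

    #!/usr/bin/env perl
    # Generic hook driver: each hook (pre-commit, update, pre-receive, ...)
    # is a link to this script, and run_hook dispatches to the enabled
    # plugins based on the name it was invoked under.
    use Git::Hooks;
    run_hook($0, @ARGV);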
I led a project at CPqD which replaced our old on-premises e-mail and calendaring system with Google Apps. The project affected 1,500 users and was a true success, with more than 90% of the users rating the system as good or excellent 10 months after the change. A good deal of attention to change management was needed to keep users informed and avoid the usual pitfalls that plague big projects affecting lots of people. Two things in which I invested significant time were maintaining a FAQ (which by February 2012 had almost 90 entries) and producing more than 30 short videos with specific instructions for users.
I participated again in Papotech's episode 125, talking about the 20th anniversary of Linux.
This small Perl module makes it easier to interact with JIRA using its SOAP API. It is available on CPAN and as a Debian package. I used it to implement several integrations with CPqD's internal JIRA server. It has received the attention of countless (about two) people and I have already received code contributions to enhance it. It's deprecated now, since JIRA's SOAP API was deprecated.
I led a project at CPqD with the goal of integrating, configuring, and supporting a set of software development tools to be our standard tool bag. In the course of this project I had the opportunity to develop and distribute a framework to build Subversion hooks called SVN::Hooks. It's indexed on CPAN and has been included in Debian's development version. I learned a great deal trying to satisfy the CPAN Testers people.
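A minimal sketch of the kind of hook script SVN::Hooks enables, assuming the DenyFilenames and CheckLog plugins and their documented directives; the specific policies below are made up:

    #!/usr/bin/perl
    # A Subversion hook (e.g. pre-commit) is a link to a script like this.
    use SVN::Hooks;
    use SVN::Hooks::DenyFilenames;
    use SVN::Hooks::CheckLog;

    # Made-up policies: reject Windows binaries and require a ticket id
    # at the start of every log message.
    DENY_FILENAMES(qr/\.(exe|dll)$/i);
    CHECK_LOG(qr/^\[[A-Z]+-\d+\]/);

    run_hook($0, @ARGV);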
In 2008 I led a project called CPqD Developer Suite (CDS) whose objective was to set up a new standard suite of software development tools for CPqD. The strategy was to use well-known free software or inexpensive tools and integrate them into CPqD's software development process. Subversion, JIRA, Enterprise Architect, Jenkins, Nexus, Sonar, and TestLink are some of the tools in the suite, which was being used by almost 900 people as of February 2012. I wrote a paper (in Portuguese) about it here.
Papotech is a well-known Brazilian technology podcast produced weekly by two guys in Americana, SP. I was interviewed by them twice, in episodes 47 and 68. I talked mostly about the history and the economics of Free Software.
I implemented a cool hack that I couldn't find described anywhere else. A friend of mine introduced me to HTTP Replicator, a caching proxy well suited to caching Linux package repositories. As soon as we deployed it we started to see lots of other uses for it. One thing annoyed some of our users, though: they didn't like having to remember to explicitly configure the proxy settings in their tools.
Our corporate squid proxy is accessed transparently by our users. As I looked at its logs I noticed that some file extensions are very good hints of big files, like 'iso', 'rpm', and 'deb'. I configured squid to automatically delegate all requests for a list of extensions to an HTTP Replicator cache on the same machine. The results were very good. We managed to decrease the traffic on our Internet connection by almost 20%. Our complaining users were able to forget about proxies and to access the Internet "directly" with the same performance. And lots of users felt an "increase" in Internet speed whenever they tried to download a popular file.
Best of all, this is a configure-once-and-forget type of system.
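For illustration, a hypothetical squid.conf fragment doing this kind of delegation; the peer port, peer name, and extension list are assumptions, not our actual configuration:

    # Hand requests for big, highly cacheable files to an HTTP Replicator
    # instance running on the same host (port 8080 is an assumption).
    cache_peer 127.0.0.1 parent 8080 0 no-query no-digest name=replicator
    acl big_files urlpath_regex -i \.(iso|rpm|deb)$
    cache_peer_access replicator allow big_files
    cache_peer_access replicator deny all
    never_direct allow big_files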
As I read Mark Jason Dominus' Higher-Order Perl I spotted some errors and sent them to him. I got mentioned in the book's errata pages.
Prorouter is a promiscuous router that can sit between an internal and an external network, providing routing services to machines in the internal network even if their IP stacks are statically configured for foreign networks. That is, you can take a machine with any statically configured IP address, netmask, default router, default SMTP server, and default HTTP proxy, and plug it into the internal network. Prorouter will automatically figure out everything it has to do (in terms of iptables configuration) so that the machine will be able to browse as if everything had been dynamically configured.
At first I didn't see the point, or even the possibility, of implementing such a thing. Those who asked for it told me that its purpose was to be used in hotels that provide Internet access, to decrease the number of support calls. I suppose this type of problem is no longer an issue, but I don't have any evidence in either direction.
The interesting part of the story is that the people at CPqD who wanted prorouter didn't have the skills and were going to contract a third party to do it. Then my chief at the time asked me if I could do it. I thought about it for a week and told them that I could do it in two months. They were surprised. The third party had promised to deliver in six months for several times the price. They called me in and, very politely, asked me if I was sure I could do it. I knew the third party and knew that they were good, but I was pretty sure of my ways. I delivered it on time and on budget.
At first I thought I would have to dig into the Linux kernel to implement some functionality. But it quickly became apparent that all I needed was tcpdump, iptables, and Perl. Prorouter is a 1,392-line Perl script (803 lines, if one doesn't count the POD documentation).
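Since I can't show the real code, here is a toy sketch of the kind of rules it had to set up; the addresses, interface names, and the assumption that the guest machine has already been detected (by sniffing its traffic with tcpdump) are all hypothetical:

    #!/usr/bin/env perl
    # Toy sketch, not prorouter itself: given a guest statically configured
    # with a foreign address and gateway, make it routable from the internal
    # network.
    use strict;
    use warnings;

    my $guest_ip  = '203.0.113.7';   # the guest's statically configured address
    my $guest_gw  = '203.0.113.1';   # the gateway it expects to find
    my $inside    = 'eth0';          # internal network interface
    my $outside   = 'eth1';          # interface towards the real Internet
    my $real_smtp = '192.0.2.25';    # our real SMTP relay

    # Answer for the guest's gateway on the internal interface so that its
    # statically configured default route keeps working.
    system 'ip', 'addr', 'add', "$guest_gw/32", 'dev', $inside;

    # Masquerade the guest's traffic so replies find their way back to us.
    system 'iptables', '-t', 'nat', '-A', 'POSTROUTING',
           '-s', $guest_ip, '-o', $outside, '-j', 'MASQUERADE';

    # Redirect mail sent to the guest's (foreign) SMTP server to our relay.
    system 'iptables', '-t', 'nat', '-A', 'PREROUTING', '-i', $inside,
           '-s', $guest_ip, '-p', 'tcp', '--dport', '25',
           '-j', 'DNAT', '--to-destination', "$real_smtp:25";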
Unfortunately, I can't publish it, as it's neither mine nor free software.
In 2002 I was asked to study the problems facing the installation and updating of software on the Windows platform. I documented the result in a report (in Portuguese) which explains the problem known as DLL Hell, tracing its origin in Windows and showing why it doesn't plague other operating systems like Unix.
VNCpool is a daemon that manages a pool of VNC servers, replacing the standard vncserver daemon. On the client side, you use a small script called vncpick that connects to vncpool, asks for an unused Xvnc server and starts up a vncviewer to connect to it. The idea is to provide several instances of VNC to lots of clients without having to start them all at once and without fixing their TCP ports. VNCpool is administered with text commands via a specific TCP connection using telnet or netcat.
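A sketch of what a vncpick-like client does, assuming a made-up host, port, and line protocol for talking to the pool daemon:

    #!/usr/bin/env perl
    # Hypothetical vncpick: ask the pool daemon for an unused Xvnc server
    # and hand the session over to vncviewer.
    use strict;
    use warnings;
    use IO::Socket::INET;

    my $pool = IO::Socket::INET->new(
        PeerAddr => 'vncpool.example.com',
        PeerPort => 5999,
        Proto    => 'tcp',
    ) or die "cannot reach vncpool: $!\n";

    print $pool "pick\n";            # request an unused Xvnc display
    chomp(my $display = <$pool>);    # e.g. "vncpool.example.com:12"
    close $pool;

    exec 'vncviewer', $display;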
VNCpool is 1,270 lines of very compact and not very readable Perl. (BTW, I do think it's possible to write readable Perl. I just didn't do it in vncpool.) I used the opportunity of writing it to exercise several things I learned by reading W. Richard Stevens' excellent books on TCP/IP and Network Programming.
Unfortunately, I can't publish it, as it's neither mine nor free software.
As I read Brian Kernighan's classic The Practice of Programming I got suspicious of his take on the Markov algorithm in Perl, because he implied that it could not be easily made as general as the versions he wrote in C, C++, and Java. I generalized his Perl program and submitted it to him. I was thrilled when I received this:
Message-Id: <199905242337.TAA12888@nslocum.cs.bell-labs.com>
Date: Mon, 24 May 1999 19:33:45 -0400
To: gustavo@cpqd.com.br
From: "Brian Kernighan" <bwk@research.bell-labs.com>
Subject: re: Another implementation of markov in Perl
content-length: 438
hi --
thank you for your generalized perl version. i ran it through
our test scripts and it works fine and runs about the same
speed as well. i think that in hindsight we would have done
better to write the perl and awk programs to solve the same
problem as the c, c++ and java. as you point out, it's not
much more code and it makes the comparison more direct.
next time!
thanks for writing, and for the effort you put in.
brian k
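For illustration only (this is not the program I sent him), here is a minimal Markov text generator in Perl with the prefix length taken as a parameter instead of being hard-wired to two words:

    #!/usr/bin/env perl
    use strict;
    use warnings;

    my $npref  = shift || 2;     # prefix length (the generalization)
    my $maxgen = 10000;          # maximum number of words to emit

    my %statetab;                # "w1 w2 ... wk" => [possible suffixes]
    my @prefix = ('') x $npref;  # start from an empty prefix

    # Build the state table from the remaining files (or standard input).
    while (<>) {
        for my $word (split) {
            push @{ $statetab{"@prefix"} }, $word;
            shift @prefix;
            push @prefix, $word;
        }
    }
    push @{ $statetab{"@prefix"} }, "\n";   # mark the end of the input

    # Generate text by repeatedly picking a random suffix of the prefix.
    @prefix = ('') x $npref;
    for (1 .. $maxgen) {
        my $suffixes = $statetab{"@prefix"} or last;
        my $word = $suffixes->[rand @$suffixes];
        last if $word eq "\n";
        print "$word ";
        shift @prefix;
        push @prefix, $word;
    }
    print "\n";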
I found a nasty bug in rsync and was able to code a patch for it, which I sent to Andrew Tridgell. The problem was solved but I never knew if my patch was used or not.
One of the annoyances I initially had when I began administering Linux machines, after having administered Solaris machines for years, was the very limited functionality of the Linux automounter. We distributed the automounter configuration via NIS maps, and some features missing from the Linux automounter were needed to integrate the Linux machines into our Unix environment. Things like direct maps and multi-mount entries were an absolute requirement.
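For the record, these are the kinds of map entries that were the problem; the keys and server names below are illustrative only:

    # A direct map entry: the key is an absolute path instead of a name
    # relative to an indirect mount point.
    /usr/local/doc    docserver:/export/doc

    # A multi-mount entry: a single key mounts a small hierarchy at once.
    tools \
        /        toolserver1:/export/tools \
        /extra   toolserver2:/export/tools-extra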
I was able to emulate these features by means of a Perl script (automap) and a small patch to the automounter init script (/etc/init.d/autofs).
The Linux automounter has caught up, and today it does almost everything we need except a complete implementation of direct maps. We still use automap because of that.
Part of my duty as an IT planner is to evaluate the needs of the IT infrastructure at CPqD. In 1999 one project was going to purchase a new and expensive server. Their justification was that they had a system that ran very slowly on the existing server. I was asked to investigate whether there was a real need for new hardware or whether there was something we could do at the operating system level to better accommodate their system.
The system was a bunch of AWK scripts that processed large text files and produced reports. Since I was becoming a fan of Perl at that time, I told them that I wanted to see if we could benefit from converting some of the scripts to Perl. They let me re-implement one script. The test they gave me took about 90 minutes to run on the server. My reimplementation in Perl took about 2 minutes on my desktop.
Of course it wasn't just Perl. I used a different algorithm altogether, so I couldn't really tell how much of the increase in performance was due to Perl and how much to the new algorithm. But that didn't matter. They didn't get to buy the new server and started to use Perl in lots of other places.
As I read Bjarne Stroustrup's The C++ Programming Language (3rd Edition) I kept taking notes of small errors and typos. I sent them to him and got a mention in the book's errata page and a check!
When I joined the Unix team at CPqD we were responsible for more than 400 SunOS workstations. But there was no infrastructure to maintain their configuration automatically. Even the installations were done by hand, one machine at a time.
The first thing we tried to automate was the installation, using the then-new Solaris JumpStart technology. After that, we needed something to keep the configuration of the operating system in sync with our master plan. I took a quick look at the then-new cfengine tool but dismissed it, mostly because I found it difficult to understand. I decided to implement our infrastructure as a set of shell scripts that we ended up calling checkconfig.
Checkconfig is still running every night on almost every Unix machine at CPqD, checking their configuration and automating any change we want to perform globally.
In retrospect I think it was a dumb idea to dismiss cfengine. We didn't have the energy or the skills to make checkconfig do 10% of what cfengine can do today. Anyway, I learned a lot while implementing checkconfig.
When I learned that rms was coming to Campinas, I invited him to come to CPqD to give a speech and to be interviewed by me and the other editors of our internal IT bulletin, called ConteXto. Stallman is a very intense and somewhat eccentric person. That day left a mark on me.
I was reading Donald Knuth's Literate Programming at about the same time I bought Jethro Tull's Aqualung album. Its cover had a poem that intrigued me. Not so much because of its literary quality, but because it contained some words that I was using at that time in my C++ programming. Words like 'void', 'cast', and 'create'. I don't remember how the idea entered my mind, but the poem inspired me to write a meaningless program. The poem turned out to be the comments for a C++ program. That's why I called it "Literate Commenting".
I don't think Knuth would like it at all. But I consider it very funny.
CSD was my main project during my work in the CHILL group. My first task was to take it from version 2.0 to version 3.0. Its interface was heavily influenced by VMS's Pascal debugger. It was coded in CHILL and ran on the Trópico RA switching system boards that were developed at CPqD.
I still vividly remember when some developer called me to fix a CSD problem. As I entered the lab I always noticed, nervously, that about half of the terminals had the cursor blinking after the CSD prompt. The problem was that, due to a weird feature of the operating system, CSD was unable to debug itself, and I had to resort to the assembly monitor to debug it. With time I became very good at following code and untangling the stack in CSD memory dumps. It gave me the sort of warm fuzzy feeling that gamers feel when they master a difficult game, I suppose.
The original CSD ended its life in version 3.2. In 1994 we started a new version from scratch whose main goal was to offer a remote debugging interface that could run on our Sun workstations. It had a small CHILL monitor running on the switching boards which talked via a serial line to a proxy on a workstation. The proxy talked to the main program via TCP. The main program was almost 40,000 lines of C++. At that time, C++ wasn't mature. I used the GNU C++ compiler, which didn't even implement the Standard Template Library. It also didn't have exceptions. It was a very interesting time to program in a language that was still being developed.
Unfortunately, CSD 4.0 was aborted in the alpha stage. I learned a lot while implementing it. In fact, it was the first time I was implementing a large program on a mainstream platform (SunOS 4), which made me realise how little I knew about systems programming and networking. Until then, I had implemented small programs in VMS Pascal, and lots of programs in CHILL.
I can't publish all its code.
In 1992 I participated in the CCITT (now ITU-T) meeting of the CHILL group in Geneva, where I presented a few small contributions to amend the language. As far as I remember, all of them were accepted and incorporated into the next version of the language specification. You can read the contributions num, receive, state, and upper.
Back in the days when the Pascal programming language and its derivatives (Ada, CHILL, etc.) ruled, I devised a way to optimize the code generated to implement nested procedures. In order to allow a nested procedure to access the local variables of its outer procedures, it's necessary to maintain what's called a display or, alternatively, a static chain. The standard implementation of these structures imposes a small overhead at the entry and exit points of every outer procedure.
Relaxed displays and static chains are similar structures implemented in a way that avoids some of that overhead. I suppose they might have been invented by others, but I came up with them on my own.
Go to my About.me page!