Farewell to Tinderbox, the world’s 1st? 2nd? Continuous Integration server

29 Comments

In April 1997, Netscape Release Engineers wrote, and started running, the world’s first? second? continuous integration server. Now, just over 17 years later, in May 2014, the tinderbox server was finally turned off. Permanently.

This is a historic moment for Mozilla, and for the software industry in general, so I thought people might find it interesting to get some background, as well as outline the assumptions we changed when designing the replacement Continuous Integration and Release Engineering infrastructure now in use at Mozilla.


At Netscape, developers would check in a code change, and then go home at night, without knowing if their change broke anything. There were no builds during the day.

Instead, developers would have to wait until the next morning to find out if their change caused any problems. At 10am each morning, Netscape RelEng would gather all the checkins from the previous day, and manually start a build. Even if a given individual change was “good”, it was frequently possible for a combination of “good” changes to cause problems. In fact, as this was the first time that all the checkins from the previous day were compiled together, or “integrated” together, surprise build breakages were common.

This integration process was so fragile that all developers who did checkins in a day had to be in the office before 10am the next morning to immediately help debug any problems that arose with the build. Only after the 10am build completed successfully were Netscape developers allowed to start checking in more code changes on top of what was now proven to be good code. If you were lucky, this 10am build worked the first time, took “only” a couple of hours, and allowed new checkins to start lunchtime-ish. However, this 10am build was frequently broken, causing checkins to remain blocked until the gathered developers and release engineers figured out which change caused the problem and fixed it.

Fixing build bustages like this took time, and lots of people, to figure out which of the day’s checkins caused the problem. Worst case, some checkins were fine by themselves, but caused problems when combined with, or integrated with, other changes, so even the best-intentioned developer could still “break the build” in non-obvious ways. Sometimes, it could take all day to debug and fix the build problem – no new checkins happened on those days, halting all development for the entire day. Rarer, but not unheard of, was a build bustage that halted development for multiple days in a row. Obviously, this was disruptive to the developers who had landed a change, to the other developers who were waiting to land a change, and to the Release Engineers in the middle of it all… With so many people involved, this was expensive to the organization in terms of salary as well as opportunity cost.

If you could do builds twice a day, you only had half-as-many changes to sort through and detangle, so you could more quickly identify and fix build problems. But doing builds more frequently would also be disruptive because everyone had to stop and help manually debug-build-problems twice as often. How to get out of this vicious cycle?

In these desperate times, Netscape RelEng built a system that grabbed the latest source code, generated a build, displayed the results in a simple linear time-sorted format on a webpage where everyone could see status, and then started again… grab the latest source code, build, post status… again. And again. And again. Not just once a day. At first, this was triggered every hour, hence the phrase “hourly build”, but that was quickly changed to starting a new build immediately after finishing the previous build.

All with no human intervention.
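The loop itself really was that simple. Here is a minimal sketch in Python – the three callables are hypothetical stand-ins for the real “update the checkout”, “run the build”, and “post status to the webpage” steps, and the loop is bounded for illustration (the real one ran forever):

```python
def continuous_build(get_latest_rev, run_build, post_status, cycles):
    """Tinderbox-style loop: grab source, build, publish status, repeat.

    get_latest_rev/run_build/post_status are illustrative stand-ins for
    the real cvs-update, compile, and post-to-webpage steps.
    """
    history = []
    for _ in range(cycles):                     # the real loop never stopped
        rev = get_latest_rev()                  # grab the latest source
        status = "green" if run_build(rev) else "red"
        post_status(rev, status)                # everyone can see status
        history.append((rev, status))
    return history

# Toy run: the checkin at revision 3 "burns the tree".
seen = []
history = continuous_build(
    get_latest_rev=iter([1, 2, 3, 4]).__next__,
    run_build=lambda rev: rev != 3,             # rev 3 breaks the build
    post_status=lambda rev, s: seen.append((rev, s)),
    cycles=4,
)
print(history)  # [(1, 'green'), (2, 'green'), (3, 'red'), (4, 'green')]
```

Because each iteration publishes its result before looping again, anyone watching the webpage can narrow a breakage down to the handful of checkins in the failing cycle.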

By integrating all the checkins and building continuously like this throughout the day, it meant that each individual build contained fewer changes to detangle if problems arose. By sharing the results on a company-wide-visible webserver, it meant that any developer (not just the few Release Engineers) could now help detangle build problems.

What do you call a new system that continuously integrates code checkins? Hmmm… how about “a continuous integration server”?! Good builds were colored “green”. The vertical columns of green reminded people of trees, giving rise to the phrase “the tree is green” when all builds looked good and it was safe for developers to land checkins. Bad builds were colored “red”, and gave rise to “the tree is burning” or “the tree is closed”. As builds would break (or “burn” with flames) with seemingly little provocation, the web-based system for displaying all this was called “tinderbox”.

Pretty amazing stuff in 1997, and a pivotal moment for Netscape developers. When Netscape moved to open source Mozilla, all this infrastructure was exposed to the entire industry and the idea spread quickly. This remains a core underlying principle in all the various continuous integration products, and agile / scrum development methodologies, in use today. Most people starting a software project in 2014 would first set up a continuous integration system. But in 1997, this was unheard of and simply brilliant.

(From talking to people who were there 17 years ago, there’s some debate about whether this was originally invented at Netscape or inspired by a similar system at SGI that was hardwired into the building’s public announcement system using a synthesized voice to declare: “THE BUILD IS BROKEN. BRENDAN BROKE THE BUILD.” If anyone reading this has additional info, please let me know and I’ll update this post.)


If the tinderbox server was so awesome, and worked so well for 17 years, why turn it off? Why not just fix it up and keep it running?

In mid-2007, an important criterion for the reborn Mozilla RelEng group was to significantly scale up Mozilla’s developer infrastructure – not just incrementally, but by orders of magnitude. This was essential if Mozilla was to hire more developers, gather many more community members, tackle a bunch of major initiatives, ship releases more predictably, and have these additional developers and community contributors work effectively. When we analyzed how tinderbox worked, we discovered a few assumptions from 1997 no longer applied, and were causing bottlenecks we needed to solve.


1) Need to run multiple jobs-of-the-same-type at a time
2) Build-on-checkin, not build-continuously.
3) Display build results arranged by developer checkin not by time.


1) Need to run multiple jobs-of-the-same-type at a time
The design of this tinderbox waterfall assumed that you only had one job of a given type in progress at a time. For example, one linux32 opt build had to finish before the next linux32 opt build could start.

Mechanically, this was done by having only one machine dedicated to doing linux opt builds, and that one machine could only generate one build at a time. The results from that one machine were displayed in one time-sorted column on the webpage. If you wanted an additional different type of build, say linux32 debug builds, you needed another dedicated machine displaying results in another dedicated column.

For a small (~15?) number of checkins per day, and a small number of types of builds, this approach worked fine. However, as the number of checkins per day increased, each “hourly” build contained almost as many checkins as Netscape had in an entire day in 1997. By 2007, Mozilla was routinely struggling with multi-hour blockages as developers debugged integration failures.

Instead of having only one machine do linux32 opt builds at a time, we set up a pool of identically configured machines, each able to do a build-per-checkin, even while the previous build was still in progress. In peak load situations, we might still get more-than-one-checkin-per-build, but now we could start the 2nd linux32 opt build, even while the 1st linux32 opt build was still in progress. This got us back to having a very small number of checkins, ideally only one checkin, per build… identifying which checkin broke the build, and hence fixing that build, was once again quick and easy.

Another related problem here was that there were ~86 different types of machines, each dedicated to running a different type of job on its own OS, and each reporting to its own dedicated column on the tinderbox. There was a linux32 opt builder, a linux32 debug builder, a win32 opt builder, etc. This design had two important drawbacks.

Each different type of build took a different amount of time to complete. Even if all jobs started at the same time on day 1, the continuous looping of jobs of different durations meant that after a while, all the jobs were starting/stopping at different times – which made it hard for a human to look across all the time-sorted waterfall columns to determine if a particular checkin had caused a given problem. Even getting all 86 columns to fit on a screen was a problem.

It also made each of these 86 machines a single point of failure to the entire system, a model which clearly would not scale. Building out pools of identical machines from 86 machines to ~5,500 machines allowed us to generate multiple jobs-of-the-same-type at the same time. It also meant that whenever one of these set-of-identical machines failed, it was not a single point of failure, and did not immediately close the tree, because another identically-configured machine was available to handle that type of work. This allowed people time to correctly diagnose and repair the machine properly before returning it to production, instead of being under time-pressure to find the quickest way to band-aid the machine back to life so the tree could reopen, only to have the machine fail again later when the band-aid repair failed.
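The pool model can be sketched as a small dispatcher – machine names and job types here are illustrative, not the real configuration. Each job of a given type goes to the next interchangeable machine in that type’s pool, so two linux32 opt builds can run concurrently, and losing one machine just shrinks the pool instead of closing the tree:

```python
def dispatch(jobs, pools):
    """Pool-based dispatch sketch: 'pools' maps a job type to a list of
    identically configured, interchangeable machines. Jobs of the same
    type rotate through the pool, so no single machine is a single
    point of failure."""
    next_slot = {}
    assignments = []
    for job_type in jobs:
        machines = pools[job_type]
        i = next_slot.get(job_type, 0)
        assignments.append((job_type, machines[i % len(machines)]))
        next_slot[job_type] = i + 1
    return assignments

# Hypothetical pools: two interchangeable linux builders, one win builder.
pools = {"linux32-opt": ["bld-linux-1", "bld-linux-2"], "win32-opt": ["bld-win-1"]}
print(dispatch(["linux32-opt", "linux32-opt", "win32-opt"], pools))
# [('linux32-opt', 'bld-linux-1'), ('linux32-opt', 'bld-linux-2'), ('win32-opt', 'bld-win-1')]
```

Taking a sick machine out of rotation is just removing it from its pool’s list; the dispatcher keeps assigning jobs to whatever identical machines remain.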

All great, but fixing that uncovered the next hidden assumption.


2) Build-on-checkin, not build-continuously.

The “grab the latest source code, generate a build, display the results” loop of tinderbox never checked whether anything had actually changed. Tinderbox just started another build – even if nothing had changed.

Having only one machine available to do a given job meant that machine was constantly busy, so this assumption was not initially obvious. And given that the machine was on anyway, what harm was there in having it do an unnecessary build or two?

Generating extra builds, even when nothing had changed, complicated the manual what-change-broke-the-build debugging work. It also introduced delays when a human actually did a checkin, as a build containing that checkin could only start after the unnecessary nothing-changed build already in progress completed.

Finally, when we changed to having multiple machines run jobs concurrently, having the machines build even when there was no checkin made no sense. We needed to make sure each machine only started building when a new checkin happened, and there was something new to build. This turned into a separate project to build out an enhanced job scheduler and machine-tracking system which could span 4 physical colos and 3 Amazon regions, assign jobs to the appropriate machines, take sick/dead machines out of production, add new machines into rotation, etc.
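The core of the build-on-checkin change can be sketched in a few lines – this is a simplification of the scheduling logic, not the real buildbot scheduler: compare each polled revision against the last one built, and only create a job when something actually changed (the original tinderbox skipped this check and rebuilt unconditionally):

```python
def builds_needed(polled_revisions, last_built=None):
    """Build-on-checkin sketch: schedule a build only when the polled
    revision differs from the last revision built. Rebuilding when
    nothing changed wastes machines and delays real checkins."""
    jobs = []
    for rev in polled_revisions:
        if rev != last_built:       # something new landed -> build it
            jobs.append(rev)
            last_built = rev
    return jobs

# Polling sees revision 'a' three times before 'b' lands: two builds, not four.
print(builds_needed(["a", "a", "a", "b"]))  # ['a', 'b']
```

With this check in place, an idle pool stays idle until a developer lands something, and a fresh checkin never has to wait behind a nothing-changed build.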


3) Display build results arranged by developer checkin not by time.

Tinderbox sorted results by time, specifically job-start-time and job-end-time. However, developers typically care about the results of their own checkin, and sometimes the results of the checkin that landed just before theirs.

Further: once we started generating multiple-jobs-of-the-same-type concurrently, it uncovered another hidden assumption. The design of this cascading waterfall assumed that you only had one build of a given type running at a time; the waterfall display was not designed to show the results of two linux32 opt builds that were run concurrently. As a transition, we hacked our new replacement systems to send tinderbox-server-compatible status for each concurrent build to the tinderbox server… more observant developers would occasionally see some race-condition bugs with how these concurrent builds were displayed in the one column of the waterfall. These intermittent display bugs were confusing and hard to debug, but usually self-corrected.

As we supported more OSes, more build-types-per-OS, and started to run unittests and perf-tests per platform, it quickly became more and more complex to figure out whether a given change had caused a problem across all the time-sorted columns on the waterfall display. Complaints about the width of the waterfall not fitting on developers’ monitors were widespread. Running more and more of these jobs concurrently made deciphering the waterfall even more complex.

Finding a way to collect all the results related to a specific developer’s checkin, and display these results in a meaningful way was crucial. We tried a few ideas, but a community member (Markus Stange) surprised us all by building a prototype server that everyone instantly loved. This new server was called “tbpl”, because it scraped the TinderBox server Push Logs to gather its data.

Over time, there have been improvements to tbpl.mozilla.org to allow sheriffs to “star” known failures, link to self-service APIs, link to the commits in the repo, link to bugs, and most importantly gather all the per-checkin information directly from the buildbot scheduling database we use to schedule and keep track of job status… eliminating the intermittent race-condition bugs from scraping the HTML page on the tinderbox server. All great, but the user interface has remained basically the same since the first prototype by Markus – developers can easily and quickly see whether a checkin has caused any bustage.
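The per-checkin grouping idea can be sketched as follows – the field names are illustrative, not the real tbpl schema: instead of sorting job results by start time, collect every job under the push (checkin) that triggered it, so one row answers “did my checkin break anything?”:

```python
from collections import defaultdict

def group_by_push(job_results):
    """tbpl-style view sketch: gather every job result under the push
    that triggered it, rather than into time-sorted machine columns."""
    by_push = defaultdict(list)
    for job in job_results:
        by_push[job["push"]].append((job["platform"], job["result"]))
    return dict(by_push)

# Hypothetical job records from concurrent builders.
jobs = [
    {"push": "rev1", "platform": "linux32-opt",   "result": "green"},
    {"push": "rev2", "platform": "linux32-opt",   "result": "red"},
    {"push": "rev1", "platform": "linux32-debug", "result": "green"},
]
print(group_by_push(jobs))
# {'rev1': [('linux32-opt', 'green'), ('linux32-debug', 'green')], 'rev2': [('linux32-opt', 'red')]}
```

Note that the two concurrent linux32 opt builds land in different rows here, one per push – exactly the case the single time-sorted waterfall column could not display.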


Fixing these 3 root assumptions in tinderbox.m.o code would be “non-trivial” – basically a re-write – so we instead focused on gracefully transitioning off tinderbox. Since September 2012, all Mozilla RelEng systems have been off tinderbox.m.o, using tbpl.m.o plus buildbot instead.

Making the Continuous Integration process more efficient has allowed Mozilla to hire more developers who can do more checkins, transition developers from all-on-one-tip development to multi-project-branch development, and change the organization from traditional releases to a rapid-release model. Game-changing stuff. Since 2007, Mozilla has grown the number of employee engineers by a factor of 8, while the number of checkins that developers did has increased by a factor of 21. Infrastructure improvements have outpaced hiring!

On 16 May 2014, with the last Mozilla project finally migrated off tinderbox, the tinderbox server was powered off. Tinderbox was the first of its kind, and helped change how the software industry developed software. As much as we can gripe about the tinderbox server’s various weaknesses, it carried Mozilla from 1997 until 2012, and spawned an industry of products that help developers ship better software. Given its impact, it feels like we should look for a pedestal to put it on, with a small plaque that says “This changed how software companies develop software, thank you Tinderbox”… As it has been a VM for several years now, maybe this blog post counts as a virtual pedestal?! Regardless, if you are a software developer, and you ever meet any of the original team who built tinderbox, please do thank them.

I’d like to give thanks to some original Netscape folks (Tara Hernandez, Terry Weissman, Lloyd Tabb, Leaf, jwz) as well as aki, brendan, bmoss, chofmann, dmose, myk and rhelmer for their help researching the origins of Tinderbox. Also, thank you to lxt, catlee, bhearsum, rail and others for inviting me back to attend the ceremonial final-powering-off event… After the years of work leading up to this moment, it meant a lot to me to be there at the very end.

John.

ps: the curious can view the cvs commit history for tinderbox webpage here (My favorite is v1.10!) …and the cvs commit history for tinderbox server here (UPDATE: Thanks to @justdave for the additional link.)

pps: When a server has been running for so long, figuring out what other undocumented systems might break when tinderbox is turned off is tricky. Here’s my “upcoming end-of-life” post from 02-apr-2013 when we thought we were nearly done. Surprise dependencies delayed this shutdown several times and frequently uncovered new, non-trivial, projects that had to be migrated. You can see the various loose ends that had to be tracked down in bug#843383, and all the many many linked bugs.

ppps: Here’s what MozillaDeveloperNetwork and Wikipedia have to say about Tinderbox server.

(UPDATE: add links to wikipedia, MDN, and to fix some typos. joduinn 28jun2014)

  3. @gvwilson
    04 Jun 2014 @ 13:52:37

    We need more articles like “@lmorchard: Farewell to Tinderbox, the world’s 1st? 2nd? Continuous Integration server http://t.co/g3QIfnAchD”

  5. Jeff Walden
    05 Jun 2014 @ 01:33:22

    Mm, such good history.

    I remember back when Tinderbox would fit on a single screen, width-wise. And you didn’t have those gray boxes for build-machine inactivity gaps between builds — a quick hack to the fork of Tinderbox code, that took on a life of its own and substantially interfered with ever re-merging the two.

    I later remember when Tinderbox wouldn’t fit on a single screen, width-wise. Even zoomed out as far as Firefox would allow, on a ~2500×1600 30″ monster screen (for the time), it still wouldn’t fit. You had to scroll horizontally to see everything. And good luck seeing much in there but green or orange or red. Maybe if you were really precise you could click a build log link, but it’d be just as easy to click the C for the list of changes in Bonsai. (That’s another demerit of by-time as opposed to by-checkin, coupled with per-file versioning, that you didn’t mention: the occasional non-atomic Tinderbox build that would pick up only part of a commit. But it’s tangential to the story you’re relating, so understandably not mentioned.)

    tbpl was such a breath of fresh air in contrast. And yet, I totally get the defense of tinderbox. Indeed, I don’t even really understand why it even requires defense. My first experience with software development at all was free-time hacking in high school on Mozilla stuff. (For a loose definition of “hacking”, that contemplated pushing patches but not too much writing code.) That you’d push and get feedback on the correctness of a patch, always seemed incredibly intuitive to me. How else could it be? Likewise for code reviews. How many times did it happen that my proposed change would become better, less buggy, or even non-buggy, because of review feedback? (And not even just for code changes, but for documentation changes, where I initially started contributing.) Likewise for the value and utility of version control systems. Somewhat ironically, my first exposure to the absence of CI and code reviews was in a software engineering class in college — the sort of place you’d think would emphasize those sorts of fundamentals. (Then again, “academia”, perhaps. :-) )

    I’m not sure when I first learned that continuous integration, even in the somewhat unpolished and occasionally faltering way Tinderbox implemented it, was a novel idea, not the industry norm. Same for code reviews. It boggled my mind. These systems seemed so natural, how could they not be used everywhere? (At least, for sufficiently large systems. Which doubtless Mozilla is, and which doubtless colors my views, but even systems a fraction of the size of Mozilla at the time present enough complexity for both to be valuable.)

    I am reminded of a somewhat cornily-named concept from the intro AI class at MIT. Each lecture Patrick Winston would at some point present a “gold-star idea”: a valuable concept expressed in a sentence or two. (I should try to find my notes at some point and pick out those ideas again. I bet they’d speak with even greater impact now.) I don’t explicitly remember most of them any more (with one notable exception being the idea that “names give you power”, to which I have referred elsewhere online). But I do remember the introduction to the concept being that great ideas don’t require some flash of non-obvious insight or inspiration. Sometimes even an “obvious” idea can be a truly great one, if its implications are sufficiently useful and profound. I think that must apply to continuous integration. Simple and obvious when you’re used to it, yet maybe far from obvious when you haven’t observed it in practice. Or at least that’s how I imagine it must have worked, for continuous integration not to have taken off well before Netscape/Mozilla’s use of it.

    Tinderbox was a mess. But it was also a really great mess for its day. I can’t imagine working on a software project like Mozilla without its influence.

  6. John Kelleher
    05 Jun 2014 @ 01:42:44

    Nostalgia is underrated – thanks for the perspective. J

  11. Dave Miller
    17 Jun 2014 @ 19:14:10

    Just a nit to point out, your link to “the commit history for the tinderbox server” actually is the commit history of the tinderbox project web page and not the server itself.

    The actual server commit history is at http://bonsai.mozilla.org/cvsqueryform.cgi?cvsroot=/cvsroot&module=MozillaTinderboxAll

    Although I can’t get it to load because I think there’s just too much history there. Looks like it’ll load decently in two-year chunks… here’s the first two years: http://bonsai.mozilla.org/cvsquery.cgi?treeid=default&module=Tinderbox&branch=HEAD&branchtype=match&dir=&file=&filetype=match&who=&whotype=match&sortby=Date&hours=2&date=explicit&mindate=1998-03-27&maxdate=2000-01-01&cvsroot=%2Fcvsroot

    • John
      27 Jun 2014 @ 09:34:41

      hi Justdave;

      Cool find Dave, I didnt know about that. Thanks! Link added to post.

      John.

  12. @jvaleski
    26 Jun 2014 @ 14:22:19

    this code defined how I wrote code. Farewell to Tinderbox [...] Continuous Integration server via @joduinn http://t.co/B6M2HGsV5s

  13. David Karlton
    27 Jun 2014 @ 05:30:13

    Tinderbox was after my time. In the old days, we had the daily build lemons, courtesy of Tara. I think I only broke the build once. That was enough.

  15. @pberry
    27 Jun 2014 @ 19:55:19

    Did @rands ever break the build? farewell to Tinderbox, the world’s 1st? 2nd? Continuous Integration server http://t.co/tbc5Q8oHzr

  22. @lizhenry
    29 Jun 2014 @ 18:51:16

    Fabulous post on the evolution of the Netscape/Mozilla build and release process & continuous integration http://t.co/5ep5V0NctJ

  23. Michael Kaply
    30 Jun 2014 @ 06:14:44

    I’ll throw in a couple memories.

    1. I remember my first checkin that set the tinderboxes aflame back in my “IBM at Netscape” days. I thought I had made a change that made defining XP_WIN not necessary. So I removed it from the build defines. Wreaked havoc. When I came in the next day, things were pretty bad. But folks were pretty cool about it. But I felt horrible.

    2. One of the cool features of tinderbox was that build logs could be emailed to add new machines to the tinderbox. The OS/2 build machine first lived at IBM and then lived at my house, yet still showed up on tinderbox. Although back then, sending large files through email was next to impossible (even gzipped), so by the time the emails reached tinderbox and were parsed, it overlapped the previous version, so things didn’t really work. We ended up just using a script to send just the beginning and end of build log since those were the only important parts.

  25. @brianbehlendorf
    02 Jul 2014 @ 11:41:22

    Farewell to Mozilla’s Tinderbox, the first continuous integration server. http://t.co/nCL8HpcXok

  26. @ywxwy
    05 Jul 2014 @ 11:47:33

    In May 2014 Mozilla turned off what is possibly the first Continuous Integration server in history, after 17 years: http://t.co/nXHK5S1jfI
