Infrastructure load for January 2011

Summary:

Interesting!! We had 2,636 pushes in January 2011. This is a significant jump from the last few months, and almost hit our previous record (2,707 pushes in August 2010). It is also interesting that a few branches were very busy while most branches had zero checkins.

Overall load since Jan 2009

Infrastructure load by branch

Details:

  • Shipping Fennec4.0beta4, Firefox4.0beta9, Firefox4.0beta10 and now Firefox4.0beta11 in quick succession, and with very short lockdowns, seemed to help unjam the checkin backlog this month. A great relief for everyone!
  • This faster cadence seems to have helped focus efforts, with less need to work on a project branch while waiting for a clear time to land in m-c. Also, as we get closer to actually shipping Firefox 4.0, it feels like most of the bigger pieces are done, and the remaining fixes still landing are each smaller, so they do not need a project branch and can be done on tryserver. Of course, that is just my interpretation… if you have other interpretations of the same data, let me know!
  • The load on TryServer jumped to 53% of our overall load. It looks like more people are now doing a TryServer run before landing, which means the patches that do land are less risky, and the tree stays green more often!
  • The numbers for this month are:
    • 2,636 code changes to our mercurial-based repos, which triggered 335,210 jobs:
    • 49,971 build jobs, or ~67 jobs per hour.
    • 158,121 unittest jobs, or ~213 jobs per hour.
    • 127,118 talos jobs, or ~171 talos jobs per hour.
  • We are still double-running unittests for some OSes; running unittest-on-builder and also unittest-on-tester. This continues while developers and QA work through the issues. Whenever unittest-on-test-machine is live and green, we disable unittest-on-builders to reduce wait times for builds. Any help with these tests would be great!
  • The entire series of these infrastructure load blogposts can be found here.
  • We are still not tracking any l10n repacks, nightly builds, release builds or any “idle-timer” builds in these numbers.
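The per-hour figures in the numbers above can be sanity-checked with a few lines of arithmetic (a sketch; the only assumption is that the hour count is 31 days × 24 hours for January):

```python
# Sanity-check the January 2011 job totals and per-hour rates reported above.
build_jobs = 49_971
unittest_jobs = 158_121
talos_jobs = 127_118

total_jobs = build_jobs + unittest_jobs + talos_jobs
hours_in_january = 31 * 24  # 744 hours

print(total_jobs)                               # 335210, matching the reported total
print(round(build_jobs / hours_in_january))     # ~67 build jobs per hour
print(round(unittest_jobs / hours_in_january))  # ~213 unittest jobs per hour
print(round(talos_jobs / hours_in_january))     # ~171 talos jobs per hour
```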

Detailed breakdown is:
#Pushes this month

#Pushes per hour

Here’s how the math works out (descriptions of the build, unittest and performance jobs triggered by each individual push are here):
the math behind the graphs

3 thoughts on “Infrastructure load for January 2011”

  1. Time to retire that bullet point about double-running on builders and testers – that hasn’t been true (on the trunk, the only place where we’re going to switch) since early in January, when armenzg finally slew his last dragon.

    1. hi Phil;

      Yes, Armen has slain most of those double-run-unittest dragons at this point. There are still a few cases where this happens, but it’s almost all done now – I’ll be very happy if I can remove this bullet next month. Either way, I do not include the double-runs in the math above, I only count each once, so the numbers reported here *under* count the number of jobs actually being processed.

      (Also, thanks for reading – I sometimes wonder how many people care about all this data, and comments like yours encourage me that people find them useful. As always, if you have ideas/suggestions for any other data you might like shared, or how I can improve these posts, let me know!)

      thanks
      John.
