Infrastructure load for March 2013

  • #checkins-per-month: We had 6,433 checkins in March 2013, well past our previous record of 6,247 in Jan2013. Every working day was consistently busy (>200 checkins on all but one working day), and load was sustained across more hours of each day.


  • #checkins-per-day: On 18mar, we had 323 checkins – a new record for a single day, breaking our previous record of 307 checkins-per-day on 06jan2013. During March, 20-of-31 days had over 200 checkins-per-day – that's every working day except 28mar (because of Easter weekend?). 13-of-31 days had over 250 checkins-per-day (3-of-31 days had over 300 checkins-per-day!). A sketch of how such per-day counts can be pulled from the pushlog follows this list.
  • #checkins-per-hour: Checkins are still mostly mid-day PT/afternoon ET, but the load has increased across the day. For 9 out of every 24 hours, we sustained over 10 checkins per hour – the heaviest sustained use we've seen so far. The heaviest load this month was between 2-3pm PT (13.22 checkins-per-hour).
  • As usual, our build pool handled the load well, with >95% of all builds consistently being started within 15mins.

    Our test pool situation continues to improve, as we continue migrating any test jobs that do not *require* hardware to AWS. As before, every test suite we can run on AWS is doubly good: the AWS-based test suites have great wait times, and the remaining physical-hardware-based test suites have slightly improved wait times because fewer jobs are being scheduled on our scarce hardware. Even so, it's not yet as good as the situation with our builders. For the tests that *do* require hardware, it continues to be a slow process to bring additional physical machines online. Meanwhile, RelEng, ATeam and devs continue the work of finding test suites which should (in theory!) be able to run on AWS, then fixing them so they run green. Once a test suite runs green on AWS, RelEng stops scheduling that test suite on physical machines.

    If you know of any test suites that no longer need to be run per-checkin, please let us know so we can immediately reduce the load a little. Also, if you know of any test suites which are perma-orange and hidden on tbpl.m.o, please let us know – that's the worst of both worlds: using up scarce CPU time and not being displayed. Every little bit helps put scarce test CPU to better use.
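
For anyone curious how the per-day numbers above are derived, here is a minimal sketch (not our actual reporting script) that pulls pushes for a single branch from the hg pushlog and buckets them per calendar day. The startdate/enddate parameters and the shape of the json-pushes response are assumptions about the pushlog API, and the real monthly totals sum pushes across all branches, not just mozilla-inbound:

    # Minimal sketch: count pushes per day for one branch via the hg pushlog.
    # Assumes json-pushes accepts startdate/enddate and returns a JSON object
    # keyed by push ID, each entry carrying a unix "date" timestamp.
    import json
    import urllib.request
    from collections import Counter
    from datetime import datetime, timezone

    PUSHLOG = ("https://hg.mozilla.org/integration/mozilla-inbound/json-pushes"
               "?startdate=2013-03-01&enddate=2013-04-01")

    with urllib.request.urlopen(PUSHLOG) as resp:
        pushes = json.load(resp)

    # Bucket each push by its UTC calendar day.
    per_day = Counter(
        datetime.fromtimestamp(push["date"], tz=timezone.utc).date()
        for push in pushes.values()
    )

    for day in sorted(per_day):
        print(day, per_day[day], "pushes")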

mozilla-inbound, mozilla-central, fx-team:
Ratios of checkins across these branches remain fairly consistent. mozilla-inbound continues to be heavily used as an integration branch, with 27.9% of all checkins – consistently far more than the other integration branches combined. As usual, fx-team has ~1% of checkins and mozilla-central has 1.6%.

The lure of sheriff assistance keeps mozilla-inbound consistently popular, and as usual, very few people land directly on mozilla-central these days.

mozilla-aurora, mozilla-beta, mozilla-b2g18, gaia-central:
Of our total monthly checkins:

  • 2.4% landed into mozilla-aurora, very similar to last month.
  • 1.6% landed into mozilla-beta, very similar to last month.
  • 1.5% landed into mozilla-b2g18, very similar to last month.
  • 4.8% landed into gaia-central, slightly higher than last month. gaia-central continues to be the third busiest branch overall, after try and mozilla-inbound. Obviously, these checkins are *only* for the B2G releases, so worth calling out here.
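
These per-branch figures are simply each branch's share of the 6,433 total checkins for the month. A trivial sketch of that arithmetic – the per-branch counts below are hypothetical placeholders for illustration only, since only percentages are quoted above:

    # Trivial sketch: per-branch share of monthly checkins.
    # The monthly total is real; the per-branch counts are hypothetical
    # placeholders, not the actual March 2013 numbers.
    monthly_total = 6433
    branch_counts = {
        "mozilla-aurora": 154,  # hypothetical
        "mozilla-beta": 103,    # hypothetical
        "mozilla-b2g18": 96,    # hypothetical
        "gaia-central": 309,    # hypothetical
    }

    for branch, count in branch_counts.items():
        print(f"{branch}: {count / monthly_total:.1%} of all checkins")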

misc other details:

  • Pushes per day
    • You can clearly see weekends through the month. It's worth noting that we had >200 checkins-per-day every working day in March except 28mar (because of Easter weekend?).

    • Pushes by hour of day
        Mid-morning PT is consistently the biggest spike of checkins, although this month the checkin load stayed high throughout the entire PT working day, and particularly spiked between 2-3pm PT, with 13.22 checkins-per-hour.
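
A companion to the per-day sketch above: the same pushlog data bucketed by hour of day in Pacific time. The endpoint parameters and response shape are again assumptions, and since the exact averaging method (per calendar day vs. per working day, and across which branches) isn't spelled out above, this simply divides each hour's total by the number of days in the range:

    # Minimal sketch: bucket pushes by local Pacific hour of day.
    # Same assumed json-pushes response shape as the per-day sketch above.
    import json
    import urllib.request
    from collections import Counter
    from datetime import datetime
    from zoneinfo import ZoneInfo

    PUSHLOG = ("https://hg.mozilla.org/integration/mozilla-inbound/json-pushes"
               "?startdate=2013-03-01&enddate=2013-04-01")
    PACIFIC = ZoneInfo("America/Los_Angeles")

    with urllib.request.urlopen(PUSHLOG) as resp:
        pushes = json.load(resp)

    # Bucket each push by its Pacific-time hour of day.
    per_hour = Counter(
        datetime.fromtimestamp(push["date"], tz=PACIFIC).hour
        for push in pushes.values()
    )

    days_in_range = 31
    for hour in range(24):
        avg = per_hour[hour] / days_in_range
        print(f"{hour:02d}:00 PT  total {per_hour[hour]:4d}  avg {avg:.2f}/day")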
