On Leaving Mozilla


tl;dr: On 18nov, I gave my notice to Brendan and Bob that I will be leaving Mozilla, and sent an email internally at Mozilla on 26nov. I’m here until 31dec2013. That’s a lot of notice, yet it feels right – it’s important to me that this is a smooth, stable transition.

After they got over the shock, the RelEng team is stepping up wonderfully. It’s great to see them all pitching in, sharing out the workload. They will do well. Obviously, at times like this, there are lots of details to transition, so please be patient and understanding with catlee, coop, hwine and bmoss. I have high confidence this transition will continue to go smoothly.

In writing this post, I realized I’ve been here 6.5 years, so I thought people might find the following changes interesting:

1) How quickly can Mozilla ship a zero-day security release?
was: 4-6 weeks
now: 11 hours

2) How long to ship a “new feature” release?
was: 12-18 months
now: 12 weeks

3) How many checkins per day?
was: ~15 per day
now: 350-400 per day (peak 443 per day)

4) Mozilla hired more developers
increased number of developers x8
increased number of checkins x21
The point here being that the infrastructure improved faster than Mozilla could hire developers.

5) Mozilla added mobile+b2g:
was: desktop only
now: desktop + mobile + phoneOS – many of which ship from the *exact* same changeset

6) updated tools
was: cvs
now: hg *and* git (as an aside, I don’t know any other organization that ships product from two *different* source-code revision systems.)

7) Lifespan of human Release Engineers
was: 6-12 months
now: two-losses-in-6-years (3 including me)
This team stability allowed people to focus on larger, longer-term improvements – something new hires generally can’t do while learning how to keep the lights on.

This is the best infrastructure and team in the software industry that I know of – if anyone reading this knows of better, please introduce me! (Disclaimer: there’s a big difference between people who update website(s) vs. people who ship software that gets installed on desktop or mobile clients… or even an entire phoneOS!)

Literally, Release Engineering is a force multiplier for Mozilla – this infrastructure allows us to work with, and compete against, much bigger companies. As an organization, we now have business opportunities that were previously just not possible.

Finally, I want to say thanks:

  • I’ve been here longer than a few of my bosses. Thanks to bmoss for his counsel and support over the last couple of years.
  • Thanks to Debbie Cohen for making LEAD happen – causing organizational change is big and scary; I know it’s impacted many of us here, including me.
  • Thanks to John Lilly and Mike Schroepfer (“schrep”) – for allowing me to prove there was another, better, way to ship software. Never mind that it hadn’t been done before. And thanks to aki, armenzg, bhearsum, catlee, coop, hwine, jhopkins, jlund, joey, jwood, kmoir, mgerva, mshal, nthomas, pmoore, rail, sbruno, for building it, even when it sounded crazy or hadn’t been done before.
  • Finally, thanks to Brendan Eich, Mitchell Baker, and Mozilla – for making the “people’s browser” a reality… putting humans first. Mozilla ships all 90+ locales, even Khmer, all OS, same code, same fixes… all at the same time… because we believe all humans are equal. It’s a living example of the “we work for mankind, not for the man” mindset here, and is something I remain super proud to have been a part of.

take care
John.

Infrastructure load for November 2013


  • We’re back to typical load again in November.
  • #checkins-per-month: We had 7,601 checkins in November 2013. This is our 2nd heaviest load on record, and is back in the expected range. For the curious, our heaviest month on record was August 2013 (7,771 checkins) and our previous 2nd heaviest month was September 2013 (7,580 checkins).

    [chart: Overall load since Jan 2009]

  • #checkins-per-day: Overall load was consistently high throughout the month, with a slight dip for US Thanksgiving. In November, 18-of-30 days had over 250 checkins-per-day, 13-of-30 days had over 300 checkins-per-day, and 1-of-30 days had over 400 checkins-per-day.
    [chart: Pushes this month] Our heaviest day had 431 checkins on 18nov; close to our single-day record of 443 checkins on 26aug2013.
  • #checkins-per-hour: Checkins are still mostly mid-day PT/afternoon ET. For 10 of every 24 hours, we sustained over 11 checkins per hour. Our heaviest load time this month was 11am-12noon PT, with 15.6 checkins-per-hour (a checkin every 3.8 minutes!) – slightly below our record of 15.73 checkins-per-hour.
    [chart: Pushes per hour]

mozilla-inbound, b2g-inbound, fx-team:

  • mozilla-inbound had 16.6% of all checkins. This continues to be heavily used as an integration branch. As developers use other *-inbound branches, the use of mozilla-inbound has declined over recent months, and is stabilizing in the mid-teens as a share of overall checkins.
  • b2g-inbound had 11.5% of all checkins. This continues to be a successful integration branch, with usage slightly increased over last month’s 10.3% and a sign that usage of this branch is also stabilizing.
  • fx-team had 6% of all checkins. This continues to be a very active third integration branch for developers. Usage is almost identical to last month, and shows that usage of this branch is also stabilizing.
  • The combined total of these 3 integration branches is 34.1%, which is slightly higher than last month yet fairly consistent. Put another way, sheriff-moderated branches consistently handle approx 1/3 of all checkins (while Try handles approx 1/2 of all checkins). The use of multiple *-inbounds is clearly helping improve bottlenecks (see pie chart below) and the congestion on mozilla-inbound is being reduced significantly as people switch to using other *-inbound branches instead. Overall, this configuration reduces stress and backlog headaches for sheriffs and developers, which is good. All very cool to see working at scale like this.

    [chart: Infrastructure load by branch]

mozilla-aurora, mozilla-beta, mozilla-b2g18, gaia-central:
Of our total monthly checkins:

  • 2.6% landed into mozilla-central, slightly lower than last month. As usual, most people land on sheriff-assisted branches instead of landing directly on mozilla-central.
  • 1.4% landed into mozilla-aurora, lower than last month’s abnormally high load. This is consistent with the B2G branching: B2G v1.2 checkins had been landing on mozilla-aurora, and have now moved to mozilla-b2g26_v1_2.
  • 0.9% landed into mozilla-beta, slightly higher than last month.
  • 0.0% landed into mozilla-b2g18, slightly lower than last month. This dropped to almost zero (total of 8 checkins) as we move B2G to gecko26.
  • 3.3% landed into mozilla-b2g26_v1_2, as part of the B2Gv1.2 branching involving Firefox25. As predicted, this is significantly more than last month, and is expected to continue until we move focus to B2G v1.3 on gecko28.
  • Note: gaia-central, and all other gaia-* branches, are not counted here anymore. For details, see here.

misc other details:
As usual, our build pool handled the load well, with >95% of all builds consistently being started within 15mins. Our test pool is getting up to par and we’re seeing more test jobs being handled with better response times. Trimming out obsolete builds and tests continues. As always, if you know of any test suites that no longer need to be run per-checkin, please let us know so we can immediately reduce the load a little. Also, if you know of any test suites which are perma-orange and hidden on tbpl.m.o, please let us know – those are the worst of both worlds, using up scarce CPU time *and* not being displayed for people to make use of. We’ll make sure to file bugs to get those tests fixed – or disabled; every little bit helps put scarce test CPU to better use.

RelEng group gathering in Boston


Last week, 18-22 November, RelEng gathered in Boston. As usual for these work weeks, it was jam-packed; there was group planning, and lots of group sprints – coop took on the task of blogging with details for each specific day (Mon, Tue, Wed, Thu, Fri). The meetings with Bocoup were a happy, unplanned, surprise.

Given the very distributed nature of the group, and the high-stress nature of the job, a big part of the week is making sure we maintain our group cohesion so we can work well together under pressure after we return to our respective homes. When all together in person, the trust, respect, and love for each other is self-evident and something I’m truly in awe of. I don’t know how else to describe this except “magic” – this is super important to me, and something I’m honored to be a part of.

Every gathering needs a group photo, and these are never first-shot-good-enough-ship-it, so while aki was taking a group photo, Massimo quietly set up his GoPro to timelapse the fun.

[photo: kmoir ship-it shirt]

This is Mozilla’s Release Engineering group – aki, armenzg, bhearsum, callek, catlee, coop, hwine, joey, jhopkins, jlund, joduinn, kmoir, mgerva, mshal, nthomas, pmoore, simone, rail. All proudly wearing our “Ship it” shirts.

Every RelEng work week is exhausting and hectic, and yet, at the end of each week, as we are saying our goodbyes and heading for various planes/cars/homes, I find myself missing everyone deeply and feeling so so so proud of them all.

Proposed changes to RelEng’s OSX build and test infrastructure


tl;dr: In order to improve our osx10.6 test capacity and to quickly start osx10.9 testing, we’re planning to make the following changes to our OSX build-and-test infrastructure.

1) convert all 10.7 test machines into 10.6 test machines in order to increase our 10.6 capacity. Details in bug#942299.
2) convert all 10.8 test machines into 10.9 test machines.
3) do most 10.7 builds as osx-cross-compiling-on-linux-on-AWS, and repurpose the 10.7 builder machines as additional 10.9 test machines. This cross-compiler work is ongoing; it will take time to complete and to transition into production, hence it is listed last. The curious can follow bug#921040.

Each of these items is a large stand-alone project involving the same people across multiple groups, so we’ll roll each out in the aforementioned sequence.

Additional details:
1) Removing specific versions of an OS from our continuous integration systems based on vendor support and/or usage data is not a new policy. We have done this several times in the past. For example, we have dropped WinXPsp0/sp1/sp2 for WinXPsp3; dropped WinVista for Win7; dropped Win7 x64 for Win8 x64; and soon we will drop Win8.0 for Win8.1; …
** Note for the record that this does *NOT* mean that Mozilla is dropping support for osx10.7 or 10.8; it just means we think *automated* testing on 10.6 and 10.9 is more beneficial.

2) To see Firefox’s minimum OS requirements see: https://www.mozilla.org/en-US/firefox/25.0.1/system-requirements

3) Apple is offering osx10.9 as a free upgrade to all users of osx10.7 and osx10.8. Also, note that 10.9 runs on any machine that can run 10.7 or 10.8. Because the osx10.9 release is a free upgrade, users are quickly upgrading. We are seeing a drop in both 10.7 and 10.8 users and in just a month since the 10.9 release, we already have more 10.9 users than 10.8 users.

4) Distribution of Firefox users from the most to the least (data from 15-nov-2013):
10.6 – 34%
10.7 – 23% – slightly decreasing
10.8 – 21% – notably decreasing
10.9 – 21% – notably increasing
more info: http://armenzg.blogspot.ca/2013/11/re-thinking-our-mac-os-x-continuous.html

5) Apple is no longer providing security updates for 10.7; any user looking for OS security updates will need to upgrade to 10.9. Because OSX10.9 is a free upgrade for 10.8 users, we expect 10.8 to be in a similar situation soon.

6) If a developer lands a patch that works on 10.9, but it fails somehow on 10.7 or 10.8, it is unlikely that we would back out the fix; we would instead tell users to upgrade to 10.9 anyway, for the security fixes.

7) It is no longer possible to buy any more of the 10.6 machines (known as revision 4 minis), as they are long desupported. Recycling 10.7 test machines means that we can continue to support osx10.6 at scale without needing to buy and rack new hardware, or recalibrate test and performance baselines.

8) Like all other large OS changes, this change would ride the trains. Most 10.7 and 10.8 test machines would be reimaged when we make these changes live on mozilla-central and try, while we’d leave a few behind. The few remaining would be reimaged at each 6-week train migration.

If we move quickly, this reimaging work can be done by IT before they all get busy with the 650-Castro -> Evelyn move.

For further details, see armen’s blog http://armenzg.blogspot.ca/2013/11/re-thinking-our-mac-os-x-continuous.html. To make sure this is not missed, I’ve cross-posted this to dev.planning, dev.platform and also this blog. If you know of anything we have missed, please reply in the dev.planning thread.

John.

[UPDATED 29-nov-2013 with link to bug#942299, as the 10.7->10.6 portion of this work just completed.]

The financial cost of a checkin (part 1)


This earlier blog post allowed us to do some interesting math. Now, we can mark each different type of job with its cost-per-minute to run, and finally calculate that a checkin costs us at least USD$30.60; the cost was broken out as follows: USD$11.93 for Firefox builds/tests, USD$5.31 for Fennec builds/tests and USD$13.36 for B2G builds/tests.

[chart: checkin load desktop]
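To make the arithmetic concrete, here is a minimal sketch of that calculation in Python. The per-hour rates are the AWS OnDemand prices quoted in the notes below ($0.45/hour for builders, $0.12/hour for testers); the job names and per-checkin minutes are made-up placeholders, not our real numbers – the real ones come from the per-job grid in the earlier post.

# Minimal sketch of the cost-per-checkin math. The per-hour rates are the
# AWS OnDemand prices quoted below; job names and minutes are HYPOTHETICAL
# placeholders, not Mozilla's real per-job numbers.

BUILDER_RATE_PER_MIN = 0.45 / 60.0   # USD, OnDemand builder
TESTER_RATE_PER_MIN = 0.12 / 60.0    # USD, OnDemand tester

rates = {"builder": BUILDER_RATE_PER_MIN, "tester": TESTER_RATE_PER_MIN}

# (product, job name, machine class, minutes per checkin) -- example data only
jobs = [
    ("firefox", "linux64 opt build",        "builder", 75),
    ("firefox", "mochitest browser-chrome", "tester",  113),
    ("fennec",  "android armv7 opt build",  "builder", 90),
    ("b2g",     "emulator opt build",       "builder", 120),
]

per_product = {}
for product, name, machine_class, minutes in jobs:
    cost = minutes * rates[machine_class]
    per_product[product] = per_product.get(product, 0.0) + cost

for product, cost in sorted(per_product.items()):
    print(f"{product}: USD${cost:.2f} per checkin")
print(f"total: USD${sum(per_product.values()):.2f} per checkin")

Summing the real per-job costs this way, across every build and test triggered by a checkin, is what produces the USD$30.60 figure above.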

Note:

  • This post assumes that all inhouse build/test systems have zero cost, which is obviously incorrect. Cshields is working with mmayo to calculate TCO (Total Cost of Ownership) numbers for the different physical machines Mozilla runs in our colos. Once those TCO costs are figured out, I can plug them into this grid and create an updated blogpost with revised costs. Meanwhile, however, calculating this TCO continues to take time, so for now I’ve intentionally excluded all cost of running on any inhouse machines. They are not “free”, so this is obviously unrealistic, but better than confusing this post with inaccurate data. Put another way, the costs which *are* here are an underreported part of the overall cost.
  • Each AWS region has different prices for instances. The Amazon prices used here are for the regions that RelEng is already using. We already use the two cheapest AWS regions (US-west-2 and US-east-1) for daily production load, and keep a third region on hot-backup just in case we need it.
  • The Amazon prices used here are “OnDemand” prices. For context, Amazon Web Services has 4 different price brackets for each type of machine available:
    ** OnDemand Instance: The most expensive. No need to prepay. Get an instance in your requested region, within a few seconds of asking. Very high reliability – out of the hundreds of instances that RelEng runs daily, we’ve only lost a few instances over the last ~18 months. Our OnDemand builders cost us $0.45 per hour, while our OnDemand testers cost us $0.12 per hour.
    ** 1 year Reserved Instance: Pay in advance for 1 year of use, get a discount from the OnDemand price. Functionally totally identical to OnDemand; the only change is in billing. Using 1 year Reserved Instances, our builders would cost us $0.25 per hour, while our testers would cost us $0.07 per hour.
    ** 3 year Reserved Instance: Pay in advance for 3 years of use, get a discount from the OnDemand price. Functionally totally identical to OnDemand; the only change is in billing. Using 3 year Reserved Instances, our builders would cost us $0.20 per hour, while our testers would cost us $0.05 per hour.
    ** Spot Instances: The cheapest. No need to prepay. Like a live auction, you bid how much you are willing to pay, and so long as you are the highest bidder, you’ll get an instance. This price varies throughout the day, depending on what demand other companies place on that AWS region. Unlike the other types above, a spot instance can be deleted out from under you at zero notice, killing your job-in-progress, if someone else bids more than you. This requires additional automation to detect and retrigger the aborted jobs on another instance. Also unlike the others, a spot instance can take anywhere from a few seconds to 25-30mins to get created, so this requires additional automation to handle the unpredictability. The next post will detail the costs when Mozilla RelEng is running with spot instances in production. (A rough yearly cost comparison of these price brackets follows after this list.)
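As a quick illustration of how much the price bracket matters, here is a rough sketch that annualizes the per-hour figures quoted above for one always-on instance. It treats each quoted figure as an effective blended hourly cost (i.e. it assumes the reserved-instance upfront payment is folded into those per-hour numbers), and it leaves spot pricing out entirely since that varies hour by hour.

# Rough yearly cost of one always-on instance under each AWS price bracket,
# using the per-hour figures quoted above as effective blended rates.
# Reserved-instance upfront fees are assumed to be folded into those rates;
# spot pricing is excluded because it varies hour by hour.

HOURS_PER_YEAR = 24 * 365

builder_rates = {"OnDemand": 0.45, "1yr Reserved": 0.25, "3yr Reserved": 0.20}
tester_rates  = {"OnDemand": 0.12, "1yr Reserved": 0.07, "3yr Reserved": 0.05}

for bracket in builder_rates:
    builder_yr = builder_rates[bracket] * HOURS_PER_YEAR
    tester_yr = tester_rates[bracket] * HOURS_PER_YEAR
    print(f"{bracket:>12}: builder ~${builder_yr:,.0f}/yr, tester ~${tester_yr:,.0f}/yr")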

Being able to answer “how much did that checkin actually cost Mozilla” has interesting consequences. Cash has a strange cross-cultural effect – it helps focus discussions.

Now we can see the financial cost of running a specific build or test.

Now it’s easy to see the cold financial savings of speeding up a build, or the cost savings gained by deleting invalid/broken tests.

Now we can determine approximately how much money we expect to save with some cleanup work, and can use that information to decide how much human developer time is worth spending on cleanup/pruning.

Now we can make informed tradeoff decisions between the financial & market value of working on new features and the financial value of cheaper+faster infrastructure.

Now, it is no longer just about emotional, “feel good for doing right” advocacy statements… now each piece of cleanup work has a clear, cold, hard cash value for us all to see and to help justify the work as a tradeoff against other work.

All in all, it’s a big, big deal, and we can now ask “Was that all worth at least $30.60 to Mozilla?”.

John.
(ps: Thanks to Anders, catlee and rail for their help with this.)

Infrastructure load for October 2013


  • Overall this month was quieter than usual. I’d guess that this was caused by a combination of fatigue after the September B2G workweek, the October stabilization+lockdown period for B2Gv1.2, and Canadian Thanksgiving. Oh, and of course, Mozilla’s AllHands Summit in early October. Data for November is already higher, back towards more typical numbers. A big win was turning off obsolete builds and tests, which reduced our load by 20%.
  • #checkins-per-month: We had 6,807 checkins in October 2013. This is ~10% below last month’s 7,580 checkins.

    [chart: Overall load since Jan 2009]

  • #checkins-per-day: Overall load was down throughout the month. In October, 15-of-31 days had over 250 checkins-per-day, and 8-of-31 days had over 300 checkins-per-day. No day in October was over 400 checkins-per-day.
    [chart: Pushes this month] Our heaviest day had 344 checkins on 28oct; impressive by most standards, yet well below our single-day record of 443 checkins on 26aug.
  • #checkins-per-hour: Checkins are still mostly mid-day PT/afternoon ET. For 7 of every 24 hours, we sustained over 11 checkins per hour. Our heaviest load time this month was 2pm-3pm PT, with 12.77 checkins-per-hour (a checkin every 4.7 min) – below our record of 15.73 checkins-per-hour.
    [chart: Pushes per hour]

mozilla-inbound, b2g-inbound, fx-team:

  • mozilla-inbound continues to be heavily used as an integration branch. As developers use other *-inbound branches, the use of mozilla-inbound at 15.8% of all checkins is much reduced from typical, and also reduced from last month – which was itself the lowest ever usage of mozilla-inbound. The use of multiple *-inbounds is clearly helping improve bottlenecks (see pie chart below) and the congestion on mozilla-inbound is being reduced significantly as people switch to using other *-inbound branches instead. This also reduces stress and backlog headaches for sheriffs, which is good. All very cool to see.
  • b2g-inbound continues to be a great success, now up to 10.3% of this month’s checkins landing here, a healthy increase over last month’s 8.8% and further evidence that use of this branch is helping.
  • With sheriff coverage, fx-team is clearly a very active third place for developers, with 5.6% of checkins this month. This is almost identical to last month, and may become the stable point for this branch.
  • The combined total of these 3 integration branches is 30.2%, which is fairly consistent. Put another way, sheriff-moderated branches consistently handle approx 1/3 of all checkins (while Try handles approx 1/2 of all checkins).

    [chart: Infrastructure load by branch]

mozilla-aurora, mozilla-beta, mozilla-b2g18, gaia-central:
Of our total monthly checkins:

  • 2.6% landed into mozilla-central, slightly higher than last month. As usual, very few people land directly on mozilla-central these days, when there are sheriff-assisted branches available instead.
  • 3.2% landed into mozilla-aurora, much higher than usual. I believe this was caused by the B2G branching, which had B2G v1.2 checkins landing on mozilla-aurora.
  • 0.8% landed into mozilla-beta, slightly higher than last month.
  • 0.2% landed into mozilla-b2g18, slightly lower than last month. This should quickly drop to zero as we move B2G to gecko26.
  • 0.4% landed into mozilla-b2g26_v1_2, which was only enabled for checkins as part of the B2Gv1.2 branching involving Firefox25. This should quickly grow in usage until we move focus to B2G v1.3 on gecko28.
  • Note: gaia-central, and all other gaia-* branches, are not counted here anymore. For details, see here.

misc other details:
As usual, our build pool handled the load well, with >95% of all builds consistently being started within 15mins. Our test pool is getting up to par and we’re seeing more test jobs being handled with better response times. Trimming out obsolete builds and tests reduced our load by 20% – or, put another way, got us 20% extra “free” capacity. Still more work to be done here, but very encouraging progress. As always, if you know of any test suites that no longer need to be run per-checkin, please let us know so we can immediately reduce the load a little. Also, if you know of any test suites which are perma-orange and hidden on tbpl.m.o, please let us know – those are the worst of both worlds, using up scarce CPU time *and* not being displayed for people to make use of. We’ll make sure to file bugs to get those tests fixed – or disabled; every little bit helps put scarce test CPU to better use.

[UPDATE: added mention of Mozilla Summit in first paragraph. Thanks to coop for catching that omission! joduinn 12nov2013.]

Now saving 47 compute hours per checkin!


While researching this “better display for compute hours per checkin” post, I noticed that we now “only” consume 207 compute hours of builds and tests per checkin. A month ago, we handled 254 compute-hours-per-checkin, so this is a reduction of 47 compute-hours-per-checkin.

No “magic silver bullet” here, just people quietly doing detailed, unglamorous work: finding, confirming and turning off no-longer-needed jobs. For me, the biggest gains were turning off “talos dirtypaint” and “talos rafx” across all desktop OS, a range of b2g device builds, all Android no-ionmonkey builds and tests, and a range of Android armv6 and armv7 builds and tests. At Mozilla’s volume-of-checkins, saving 47 hours-per-checkin is a big, big deal.
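For a rough sense of scale, here is the back-of-envelope arithmetic behind that claim; the daily checkin count is just November’s average (7,601 checkins over 30 days), so treat the output as an order-of-magnitude estimate rather than an exact figure.

# Back-of-envelope scale of the 47-hour-per-checkin saving. The daily
# checkin count is approximated from November's totals (7,601 / 30 days).

saved_hours_per_checkin = 47
checkins_per_day = 7601 / 30.0          # ~253 checkins/day in November

saved_hours_per_day = saved_hours_per_checkin * checkins_per_day
print(f"~{saved_hours_per_day:,.0f} compute-hours saved per day")
print(f"~{saved_hours_per_day / 24:,.0f} machine-days of work avoided per calendar day")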

This reduced our overall load by 23%. Or put another way – this work gave us 23% extra “spare” capacity to better handle the remaining builds and tests that people *do* care about.

Great, great work by sheriffs and RelEng. Thank. You.

How many hours of builds and tests do we run per commit?

  • 207 compute hours = ~8.6 compute *days* (nov2013)
  • 254 compute hours = ~10.5 compute *days* (sep2013)
  • 137 compute hours = ~5.7 compute *days* (aug2012)
  • 110 compute hours = ~4.6 compute *days* (jan2012)
  • ~40 compute hours = ~1.6 compute *days* (2009)

[chart: #test jobs in a day]

There’s still more goodness to come, as even more jobs continue to be trimmed; the curious can follow bug#784681. Of course, if you see any build/test here which is no longer needed, or is perma-failing-and-hidden on tbpl.mozilla.org, please file a bug linked to bug#784681 and we’ll investigate/disable/fix as appropriate.

Better display for “compute hours per checkin”


After my last post about our compute-load-per-checkin, I received an email that made me sit up and smile. Andershol had “a quick script” that quickly and easily displayed the same information in a grid format. Not just a suggestion – the actual code that ran, with real output. I found this format super helpful. We’ve refined this a few times now, and I think others would also find this useful, hence this post.

[chart: checkin load desktop]

  • Each vertical column is the operating system used.

  • Each horizontal row is the job type (which build-type, which test-suite,…).
  • Each white cell is the elapsed time taken by that specific job on that specific operating system; for example, running “mochitest browser chrome” on a linux 32bit opt build took 1h:53m:13s.

It is now easy to quickly see the total time spent on a given OS, by looking at the total in the gray column header (for example, Firefox desktop linux 32bit builds and tests took 21h:44m).

Similarly, it’s easy to see the total time spent on a given job (build/test), across all OS, by looking at the total in the gray row header (for example, running “mochitest browser chrome” took 4h:54m on opt, 13h:13m on debug, for a total of 18h:07m).
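For anyone curious what the underlying aggregation looks like, here is a minimal sketch of the same idea – bucket each job’s elapsed time by (job type, OS) and print the grid with row and column totals. This is *not* Andershol’s actual script (that is linked at the end of this post); the job names, OSes and durations are made-up examples.

# Minimal sketch of the grid aggregation: NOT Andershol's actual script.
# Job names, OSes and durations below are made-up examples.
from collections import defaultdict
from datetime import timedelta

# (job type, OS, elapsed time) for one checkin -- hypothetical sample data
jobs = [
    ("opt build",                "linux32", timedelta(hours=1, minutes=5)),
    ("opt build",                "linux64", timedelta(minutes=58)),
    ("mochitest browser chrome", "linux32", timedelta(hours=1, minutes=53, seconds=13)),
    ("mochitest browser chrome", "linux64", timedelta(hours=1, minutes=47)),
]

cells = defaultdict(timedelta)        # (job, os) -> elapsed time
row_totals = defaultdict(timedelta)   # job -> total across all OSes
col_totals = defaultdict(timedelta)   # os  -> total across all jobs
for job, os_name, elapsed in jobs:
    cells[(job, os_name)] += elapsed
    row_totals[job] += elapsed
    col_totals[os_name] += elapsed

oses = sorted(col_totals)
print("job".ljust(26) + "".join(o.rjust(12) for o in oses) + "total".rjust(12))
for job in sorted(row_totals):
    line = job.ljust(26)
    line += "".join(str(cells[(job, o)]).rjust(12) for o in oses)
    line += str(row_totals[job]).rjust(12)
    print(line)
print("total".ljust(26) + "".join(str(col_totals[o]).rjust(12) for o in oses))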

[chart: checkin load b2g]
The three major products (Firefox-for-desktop, Firefox-for-Android, FirefoxOS) are each shown in their own grid, but it’s worth noting that the jobs in *each* of the *three* grids are being processed per checkin. The combined total of all three grids is the overall compute load that RelEng is running per checkin.

This display format was super helpful to me, so big thanks to Andershol for making this a reality!

Also, it’s great to see no-longer-needed builds and testsuites being turned off… reducing load from 254 to 207 hours per checkin. Biggest highlights were turning off “talos dirtypaint” and “talos rafx” across all desktop OS, turning off all Android no-ionmonkey builds and tests, and turning off a range of Android armv6 and armv7 builds and tests. At Mozilla’s volume-of-checkins, those savings quickly add up.

Of course, if you notice anything else being run which you think is no longer needed, please file a bug and we’ll take care of it.

John.

[chart: checkin load mobile]
ps: Andershol has posted the code to https://github.com/andershol/buildtasks; if you have ideas, or would like to suggest enhancements, he’s happily accepting patches!

“We are all remoties” in Haas, UCBerkeley


[UPDATE: The newest version of this presentation is here. joduinn 12feb2014]

[photo: main gate]

Last week, I had the distinct privilege of being invited back to present “We are all remoties” in UCBerkeley’s “New Manager Bootcamp” series at Haas.

The auditorium was packed with ~90 people, from a range of different companies and different industries. After my experiences at Mozilla Summit, I started by asking two specific questions:

1) How many of you are remote? (only ~5% of hands went up).
2) How many of you routinely work with people who are not in the same geographical location as yourself? (100% of the hands went up!)

I found it interesting that few thought of themselves as “remotie”, yet all were working in geo-distributed teams.

[photo: crowded auditorium]
This was similar to what came up during the “We are all remoties” sessions at Mozilla Summit just a few days before, as well as at other previous “We are all remoties” sessions I’ve done elsewhere. Somehow, physically working in an office tricks some people into believing they don’t need to think of themselves as “remote”, and hence they don’t think “We are all remoties” is relevant to them!?

People were fully engaged, asking tons of great questions right from the start, and were clearly excited by practical tips for working more effectively in distributed groups. The organizers planned ahead, and specifically put this session immediately before lunch, so that the Q+A could run over… and a separate crowded room of 15-20 people continued the great back/forth over food.

After lunch, I was part of a 4-person panel, where the class got to set direction and ask all the questions – no holds barred. As the class, and the panelists, all came from different backgrounds, different cultures, different careers, it was no surprise that the Q+A uncovered different perspectives and attitudes. The class were agreeing/disagreeing with each other and with the panelists. We even had panelists asking each other questions?!?! As individual panelists, we didn’t always agree on the mechanics of what we did, but we all agreed on the motivations of *why* we did what we did: doing a good job, while also taking care of the lives and careers of the individuals, the group, and the overall organization.

The trust and honesty in the room was great, and it was quickly evident that everyone was down-to-earth, asking brutally honest questions simply because they wanted to do right with their new roles and responsibilities. Even while being on the spot with some awkward questions, I admired their sincere desire to do well in their new role, and to treat people well. It gave me hope, and I thank them all for that.

Big thanks to Homa and Kim for putting it all together. I found it a great experience, and the lively discussions during+after led me to believe others did too.

John.
PS: For a PDF copy of the presentation, click on the smiley faces! For the sake of my poor blogsite, the much, much, larger keynote file is available on request.

“We are ALL remoties” at Mozilla Summit


[UPDATE: The newest version of this presentation is here. joduinn 12feb2014]

Last weekend, during Mozilla Summit, “We are all Remoties” was held *4* times: Brussels (catlee), Toronto (Armen and Kadir) and Santa Clara (myself, twice!). Big props to Kadir for joining in with his data – it’s always great to meet others who are also thinking about how best to work together in a growing and geographically-distributed Mozilla.

I was happy to see that these different speakers, in different locations, all covered the session well, in their own personal style, and all had great responses and interactions. From all accounts, people really found this topic helpful, which is very nice to hear.

The one piece of feedback that did surprise me, from all these sessions, was that most of the people attending were already working remotely, yet very few people based in offices attended, even when their entire group was geo-distributed. The topics covered addressed people in offices too, and several times people who were remoties said to me that they wished their office-based co-workers had attended.

It’s possible that the title makes people think the session only applies to non-office-based people. One earlier title I had was “working effectively in geo-distributed teams”, but that sounded very PHB. Another title (“If you are a remotie, or if you are in an office, working with a remotie…”) was too long, but it brought me to the current title. If everyone who is on a geo-distributed team considered themselves all to be on the same level playing field, then “we are ALL remoties!”.

Spreading the word, including to more people in physical offices, is important to make everyone’s work life more effective. If you’ve any ideas/suggestions, please let me know. And thanks again for the great support in all four summit sessions!

John.

[For a PDF copy of the entire presentation, click here or on the smiley faces! For the sake of my poor blogsite, the much, much, larger keynote files are available on request.]
