Creating multi-lingual versions of Firefox and Thunderbird?

Last week, I was at a language school in Shingu, Japan that uses both Firefox and Thunderbird – and they *love* what we do. The people working in the office are from all over the world – native speakers of various languages. When they found out I worked at Mozilla, they asked if there was a way to switch locales depending on who is using the various shared computers in the office.

To be clear, they were not asking about having ‘n’ installations, one per user, each installation in a different locale. Instead, they were asking about having one installation that can switch all menu items, dialog boxes, etc. from displaying in one locale to displaying in another. This would allow all users to see the same shared browser history in Firefox and the same company email inbox in Thunderbird, while each being able to use the computer in their own native language. Students learning new languages could also try using different locales when they are helping out in the office.

On my computer, with en-US TB2.0.0.21, I was able to install the QuickLocaleSwitcher addon, install the ja-JP-mac language pack xpi file from ftp.mozilla.org, restart Thunderbird, and bingo – I now had ja-JP-mac TB2.0.0.21! Success. 🙂 I repeated the same experiment using bsmedberg’s Locale-Switcher addon instead, and it worked equally well. I then repeated these two experiments with Firefox 3.0.7 and they both worked perfectly, as expected. So far so good.
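For anyone curious what is actually changing under the hood: as far as I can tell, these addons mostly just flip the locale preference and have you restart. Here’s a minimal sketch of doing the same thing by hand – assuming the pref involved is general.useragent.locale, and with a made-up profile path:

```python
# Minimal sketch: switch an existing profile to a different installed
# language pack by setting the locale pref directly. Assumes the
# "general.useragent.locale" pref is what the locale-switcher addons flip;
# the profile path below is made up.
import os

PROFILE = os.path.expanduser(
    "~/Library/Thunderbird/Profiles/abcd1234.default")  # hypothetical path
NEW_LOCALE = "ja-JP-mac"  # must match a language pack xpi already installed

with open(os.path.join(PROFILE, "user.js"), "a") as prefs:
    # user.js is read at startup and its prefs are applied to the profile.
    prefs.write('user_pref("general.useragent.locale", "%s");\n' % NEW_LOCALE)
    # Otherwise the app may ignore the pref and just follow the OS locale.
    prefs.write('user_pref("intl.locale.matchOS", false);\n')

print("Restart Thunderbird/Firefox to pick up the new locale.")
```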

1) Gotcha: One of the computers has ja-JP TB2.0.0.21 installed, so to switch it to en-US I needed to install the en-US xpi file. But it turns out we never publish the en-US xpi file on ftp.mozilla.org, so I could never install it and switch to en-US. I could install any other locale, including a different English locale like en-GB, but that wasn’t the point. The only way to switch from any locale to en-US was to first download the en-US build?!? Huh? Where was the en-US xpi file? Turns out, while we publish xpi files for all other locales, we’ve never published the en-US xpi file. I’ve filed bug#485860 to have release automation publish the en-US xpi file alongside the xpi files for all the other locales.

2) Idea: We currently create one separate download per locale. 70 locales means 70 downloads. What if we also created one additional 71st download which contained all 70 language packs and one of these two addons out-of-the-box? For some specific types of users, I think this might be useful, so I’ve filed bug#485861 to track this. Initial thoughts:

  • I’m not talking about modifying the existing locales that we already publish. Instead, we’d just produce one additional new locale repack (called “multi-lingual”?), and make it available on the usual all.html page.
  • This would be a bigger download, because it has all the xpi files. A back-of-envelope estimate puts it at 7.4MB bigger before any installer compression.
  • There’s some concern that having multiple xpi files would cause a slower startup time on the very first startup after installation, but not on subsequent startups. mossop is investigating if this is the case, and if so, to what extent.
  • Ideally this multi-lingual release would have the addon and the multiple xpi files included, so it would just work out-of-the-box. But I’ve no idea what’s involved in pre-bundling a specific addon in Firefox or Thunderbird. However, even if we produce a multi-lingual release containing all the xpi files but neither addon, and ask the user to install the addon manually, we’ve still simplified the setup steps that a multi-lingual user needs to follow. (There’s a rough sketch of what assembling such a repack might look like just after this list.)
  • I’m still trying to figure out the update scenarios to see if there are any gotchas.
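For the curious, here is roughly what I imagine the extra repack step would look like. This is just a sketch to show the shape of the idea – the URL, directory layout and langpack naming below are my guesses, not what release automation actually does:

```python
# Rough sketch of assembling the extra "multi-lingual" repack: take an en-US
# build already unpacked into APP_DIR and drop in a langpack xpi for every
# shipped locale. The URL, layout and xpi naming are illustrative guesses,
# not the real release-automation layout.
import os
import urllib

VERSION = "3.0.7"
BASE_URL = ("https://ftp.mozilla.org/pub/mozilla.org/firefox/releases/"
            "%s/linux-i686/xpi" % VERSION)
APP_DIR = "firefox"                    # unpacked en-US build
LOCALES = ["de", "es-ES", "fr", "ja"]  # really ~70 locales, from shipped-locales

ext_dir = os.path.join(APP_DIR, "extensions")
if not os.path.isdir(ext_dir):
    os.makedirs(ext_dir)

for locale in LOCALES:
    # Langpacks install as extensions with ids like langpack-<locale>@...;
    # the exact filename convention used here is an assumption.
    target = os.path.join(ext_dir,
                          "langpack-%s@firefox.mozilla.org.xpi" % locale)
    urllib.urlretrieve("%s/%s.xpi" % (BASE_URL, locale), target)
    print("added %s" % locale)

# APP_DIR would then be re-packaged as the 71st, "multi-lingual" download.
```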

What do people think? Also, are there any other gotchas people can think of?

tc
John.

When (small) things go bump in the (big) night

Off topic, and not something I would usually blog about, but the following unrelated items caught my eye recently:

  • The Atlantic Ocean is big, and submarines are relatively tiny
    04feb2009: The British submarine HMS Vanguard and the French submarine Le Triomphant collided in the Atlantic, at an unspecified location and depth. Click here for the Wikipedia article on the accident. The good news is that nobody was reported killed or injured, and both vessels were able to make it home under their own steam. Let’s not think too much about the fact that they are both nuclear-powered subs, each carrying 16 nuclear missiles to serve in a nuclear deterrent role.
  • Space is big, and satellites are relatively tiny
    10Feb2009: An obsolete one-ton Russian communications satellite collided with a very-much-in-use-thank-you half-ton U.S. communications satellite. Both were destroyed, littering the orbital area with debris, which will cause more fun for other satellites in the same orbit.

Word of the day: Otaku

Before I left Tokyo, Gen kindly invited me along to a talk by Patrick Macias at a local campus of Temple University entitled “Otaku Power: Trivia, Desire and Transformation”.

The talk was really fascinating, and touched on a bunch of different topics:

  • Otaku in several different forms, and some very public events (mostly good, some bad) in and around Akihabara (the electronics-and-now-also-manga district here in Tokyo). The social stigma, or sometimes pride, of being called an Otaku, and how that compares to terms like nerd or geek in the US. The public cos-play which I’ve seen quite a few times since arriving in Tokyo.
  • a discussion about how animated games & movies were rated in the US, but how anime/manga was not, even though there’s a wide range of subject material covered, and some of it can be quite explicit.
  • cultural factors in the drawing of the same/similar characters. One example was how the dress code from MachGoGoGo in Japan in the 1960s was toned *down* for the remake as SpeedRacer in the US in 2009! Similarly, the dress code for AstroBoy of Japan in the 1950s had to be toned down for the AstroBoy movie in the US being released in late 2009. Another example he used was Alice in Wonderland (from right to left, the “original puritanical British edition”, the squeaky-clean Disney version, and the modern-day Japanese version).

There was a lively interaction with a very well-informed audience, and I really enjoyed the whole surprise glimpse of another side of Japan. Thanks again, Gen!!

Making unittest life better – what’s next?

Since writing this post and then this post, we now have “unittests on try” running in production. Big tip of the hat to Lukas and Catlee for that! So, what’s next on our “make unittest better” ToDo list?

1) Separate out unittest from build
Basically, running each unittest suite concurrently on separate slaves should improve end-to-end turnaround times. It’s also a prerequisite for helping track down intermittent unittest failures. See here for more details, and some pretty diagrams.

Ted’s just finished fixing the makefiles in bug#421611 so, in theory, it’s now possible to run a unittest suite on a different machine from the one where the unittest build was done. Now the next phase of work, separating builds from unittests, can begin. RelEng will start using these new makefile targets throughout our infrastructure. We’ll also start to publish those partial-debug builds someplace for these standalone unittest suites to use.
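To make that more concrete, here’s a rough sketch of what a standalone unittest slave might do once it has a build package and a tests package to work with. The filenames, package layout and runner invocation below are illustrative guesses on my part, not the actual automation steps:

```python
# Sketch of a standalone unittest job: unpack the build and the packaged
# tests produced elsewhere, then run exactly one suite against that build.
# Filenames, layout and flags are made-up examples, not the real automation.
import os
import subprocess

BUILD_PKG = "firefox-3.1b4pre.en-US.linux-i686.tar.bz2"    # from the build slave
TESTS_PKG = "firefox-3.1b4pre.en-US.linux-i686.tests.zip"  # from 'make package-tests'
SUITE = "mochitest"                                        # one suite per job

subprocess.check_call(["tar", "-xjf", BUILD_PKG])  # unpacks into ./firefox
subprocess.check_call(["unzip", "-q", TESTS_PKG])  # unpacks the test harnesses

# Run just this one suite against the unpacked build. Because each suite gets
# its own slave, several of these jobs can run concurrently for one checkin.
subprocess.check_call([
    "python", os.path.join(SUITE, "runtests.py"),
    "--appname=" + os.path.abspath(os.path.join("firefox", "firefox")),
])
```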

Once a suite running standalone is confirmed to give the same results as that suite running as part of the “build-and-unittest” job, we’ll disable that specific suite in the “build-and-unittest” job, and use the standalone unittest job in production. Each standalone unittest suite can be run concurrently, so this gets us better turnaround times.
Funnily enough, after all the behind-the-scenes complex systems infrastructure work to make this possible, we still have to decide the best way to display this information on the tinderbox waterfall?!?! Urgh.
We’ll have to do this for both the production pool-of-slaves and the try pool-of-slaves before we can declare this done. The curious can follow along in bug#474671, bug#452861.

2) Run unittests on full-debug builds, not partial-debug builds
Until now, unittests have always run on partial-debug builds. This requires us to produce a special build *only* for running unittests, which we then throw away afterwards. This is separate from, and additional to, the opt and full-debug builds that we also produce and publish.

Changing unittests to run against full-debug builds will require test/code cleanup around how exceptions are handled, but is a good thing to do for two reasons:

  • it’s something developers have been asking for, as it simplifies debugging unittest failures. We’ve just not been able to get to it before now.
  • this will allow us to do one fewer build per OS per checkin. This might not sound like much, but when we’re dealing with 900-1100 pushes per month, and each push triggers 7-11 builds, optimizing how we use pool-of-slaves capacity really matters (some rough numbers below). Data for recent months is here: Jan2009, Feb2009
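To put a rough number on that (back-of-envelope, and assuming the unittest-only build happens on roughly three desktop platforms – my assumption, not an official figure): ~1000 pushes per month at ~9 build jobs per push is about 9,000 jobs per month, and dropping one unittest-only build per OS per push removes roughly 3,000 of those – about a third of the build load.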

The curious can follow along in bug#372581.
3) Automatically detect intermittent test failures
We all know some unittests are failing intermittently right now; bug#438871 and the work Clint is doing here should help clean that all up. However, once all the unittests are fixed up, how do we make sure we don’t drift back into this state again – can we automatically detect new intermittent regressions as they creep back in?

To automatically detect this, we’re going to periodically run the same unittest suite ‘n’ times in a row on the same identical build. We don’t care so much what the test results are; we just care that we get *identical* results each time – after all, it’s the same test suite being run and re-run on the exact same build. Any variation in test results will be flagged to QA and Dev to investigate. (There’s a rough sketch of this check after the list below.)

  • Part of this will involve scheduling batches of the same suite to be run. Running five-times-in-a-row, once-per-week sounds like a good start, but we’ll tweak the number of iterations, and the frequency of runs, as we experiment.
  • This assumes we’ve already separated out the build from the running of the unittests (see above). Running tests five times in a row will take time; running build-and-test five times in a row will take *lots* of time, and is testing slightly different bits each time anyway, so not good.
  • Part of this will involve automating how we detect and track those intermittent failures. Doing this manually simply does not scale. We don’t have a bug on file for this yet, because we’re still figuring it out. The Open Design lunch we had a few weeks ago was a start, but any comments and suggestions would be great!
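Here’s the rough sketch promised above of the run-it-n-times-and-diff-the-results check. The suite command line and the parse_results() helper are made up for illustration; the point is just the comparison across identical runs:

```python
# Sketch: run the same suite N times against the *same* build and flag any
# variation in the results. The suite command and log parsing below are
# illustrative stand-ins, not real harness integration.
import subprocess

N_RUNS = 5
SUITE_CMD = ["python", "mochitest/runtests.py", "--appname=firefox/firefox"]

def parse_results(log_text):
    """Reduce a test log to something comparable, e.g. the set of failures."""
    return frozenset(line for line in log_text.splitlines()
                     if "UNEXPECTED" in line)

results = []
for i in range(N_RUNS):
    # Same suite, same build, every iteration.
    proc = subprocess.Popen(SUITE_CMD, stdout=subprocess.PIPE,
                            stderr=subprocess.STDOUT,
                            universal_newlines=True)
    out, _ = proc.communicate()
    results.append(parse_results(out))

if len(set(results)) > 1:
    # Any difference at all between runs means something is intermittent;
    # hand it to QA/Dev to investigate.
    print("INTERMITTENT: results differed across %d identical runs" % N_RUNS)
else:
    print("stable: identical results for all %d runs" % N_RUNS)
```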

Hope all that makes sense.

First impressions of “World of Goo”

After reading KaiRo’s blog post about the game “World of Goo”, I had to give it a try. It’s really addictive. I worked through the entire demo level of the game on the flight from SFO->NRT, and really like it.

The game play is well designed and totally self-evident, and I found it really compelling and humorous at the same time. Each level has the usual “do this to complete and move to the next level” goal. But each level also has a second, more complex goal called an OCD goal – yes, that really is an “Obsessive Compulsive Disorder” goal. Not sure what this says about my personality, but once I found those, it was no longer enough to complete a level and move on; now I had to obsessively complete the level.

The graphics and audio are outstanding – check out all the clips on YouTube for a quick glimpse. The characters are just cute enough to empathize with, and draw you into the game, yet not so cute as to be annoying. I quickly found myself caring when a ball-of-goo came to an untimely demise. Caring – you know, empathizing with a computer-graphic image of a small ball of oily-looking goo with eyeballs?!?! I liked Lemmings also, and KaiRo’s comparison is right-on-the-nail there. The choices of background music fit perfectly with each level, and set the tone for the challenge in the level each time. I found myself wanting to leave the computer running in levels I’d already completed, just for the music.

A lot of effort went into the design of the game and user interface, to help keep the docs/instructions down to very brief, humorous “signposts” within the game play. Looks like it should be a relatively easy game to localize, which is always a big win. Also, it’s a welcome change from the push-graphics-cards-to-the-limit intense first-person-shooter games, which are fun in a different way, but take large blocks of continuous time. I know the fluid-dynamics calcs are non-trivial, but given how simple the graphics and user interface are, I wonder if there is an Android or iPhone version in the works. (hint! hint!)

Overall, two thumbs up from me. Here’s a more formal review that I thought was done in an interesting all-you-can-discover-over-lunch format.

But the best way is to download and try it. Go on – you know you want to!

tc
John.

ps: hey Sean, did you know there is a version for Wii?! 🙂

User interface stories from a Shingu hotel

Today’s hotel had the following tap:

  • you only have one set of temperature controls
  • you can only have a shower *or* a bath; not both simultaneously
  • the dial on the front lets you choose between bath and shower. It rotates clockwise 90 degrees to the horizontal position, but immediately falls back to pointing vertically down; there was no click/resistance/activator, so I assumed it was broken. However, when you turn on the water, and have water pouring into the bath, and *then* turn that dial, the water pressure holds the lever horizontal and you get a shower! Success! 🙂

Sometimes, it feels like I’m living *in* some sort of adventure-puzzle-game; part of the fun of traveling….hmmm, which reminds me, looks like they are making progress on the new Legends of Zork.

Directions to Hombu Dojo, Tokyo

While in Tokyo, I went to train at: Aikikai Foundation / Aikido World Headquarters / Hombu Dojo
Address: 17-18 Wakamatsu Cho, Shinjuku-ku, Tokyo, 162-0056 Japan
Phone: (+81) 3-3203-9236, Fax: (+81) 3-3204-8145
Email: aikido@aikikai.or.jp
website: http://www.aikikai.or.jp/eng/index.htm (with class schedule, instructor roster, etc).

As I got lost each time, I’ve put these notes together to help me (and anyone else who’s reading!) next time.

First, print out this PDF of the subway map on a *color* printer. It was invaluable when lost, or even when just trying to confirm I was on the correct train going the right way. Take note that on the subway map each station has a name in Kanji, a name in ASCII, and a letter-plus-two-digit code. These letter-plus-two-digit codes are clearly posted in every station, on all maps, and were essential when trying to quickly figure out if I had missed my stop, or if I was on a train going the wrong way.

Take the metro to Wakamatsu-kawada (“E03”).

Note: if coming from Shibuya (“F16”), you take the brown Fukutoshin line to Higashi-shinjuku (“F12/E02”), and change there to the Oedo line. However, some Fukutoshin trains are express trains and do not stop at Higashi-shinjuku (“F12/E02”). According to the schedule, 1 train in 3 is an express train, but I managed to get on an express train *every* time I went to Hombu. The platform displays will tell you if the next train is express or local – now that you know to look!!

  • in Wakamatsu-kawada (“E03”), take the Kawadocho exit
  • turn left and walk down the sidewalk of this main road
  • continue walking down this road for approx 5 mins, when you should see a large “Eneos” petrol/gas station on the left-hand side, just as the road bends slightly to the right.
  • a few buildings after the Eneos, there is a pedestrian crossing, and also the entrance to a lane on the right-hand side. Cross over the road at this crossing, and then start walking down the lane.
  • after a couple of minutes walking in a straight line down this lane, you will see the dojo on your left-hand side. It looks like this: (photo has a creative commons license on wikipedia; see details here).
  • walk in through the glass door, and introduce yourself to the staff in the glass booth on the right-hand side. They will handle all the registration, schedules and fees.

Once inside, the main dojo mat space is on the 3rd floor, and looks like this:

Class was intense; 60-65 students in what felt like a very crowded mat space. The students were all great to train with and were quite friendly!! One student claimed that the morning classes had 100+ students, which seemed insane to me, as there were already so many people. However, it did explain the racks of white gi outfits drip-drying after class.

Unittest and l10n moved from “dedicated specialized slaves” to “pool of identical slaves”

Last summer, we changed from our “build on dedicated machines” approach:

…to instead run builds as jobs submitted to identical slaves within a pool-of-slaves:

That reduced the risk of having to close the tree because one build machine failed. It also made setting up builds on new branches like Tracemonkey relatively quick and relatively easy (more details here).
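For anyone who hasn’t poked at a buildbot config, the pool-of-slaves idea really is as simple as every builder listing every slave. Here’s a minimal, made-up sketch of what that looks like in a master.cfg (0.7-era syntax; the slave names, branch names and build step are all invented):

```python
# Minimal sketch of the pool-of-slaves idea for a buildbot master.cfg
# (0.7-era syntax). Slave names, branch names and the build step are made up;
# the point is simply that every builder lists every slave, so any idle slave
# in the pool can take any queued job.
from buildbot.buildslave import BuildSlave
from buildbot.process import factory
from buildbot.steps.shell import ShellCommand

POOL = ["moz2-linux-slave%02d" % i for i in range(1, 31)]  # identical slaves

def make_build_factory(branch):
    f = factory.BuildFactory()
    # Placeholder build step; the real factories do checkouts, compiles, etc.
    f.addStep(ShellCommand(command=["echo", "pretend to build", branch]))
    return f

c = BuildmasterConfig = {}
c['slaves'] = [BuildSlave(name, "password") for name in POOL]
c['builders'] = []
for branch in ["mozilla-central", "tracemonkey", "mozilla-1.9.1"]:
    c['builders'].append({
        'name': "linux-%s-build" % branch,
        'slavenames': POOL,   # any slave in the pool can take this job
        'builddir': "linux-%s" % branch,
        'factory': make_build_factory(branch),
    })
```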

However, that was only the first milestone.

We still had l10n and unittests running on separate dedicated machines – which meant that l10n and unittests still had the same problems that build machines used to have:

  • hard to set up on new active code lines, like when we started Tracemonkey
  • tree vulnerable to closure when a machine fails
  • spikes in load would backlog, rather than load-balance onto other available slaves

Sorting out the differences between the unittest, l10n and build machines was fiddly, as these sets of machines all have different backgrounds. Reconciling all the different user accounts, toolchains, environment variables and directory structures took patient de-tangling. And every change required a bunch of testing to make sure it didn’t introduce breakage in some other part of the infrastructure or on some other branch. While some of this could be done in staging, small chunks would be gradually rolled into production, and then, if all still looked good, we’d go back and take on the next part.

In late Dec2008, Lukas and Chris AtLee got unittests to run on machines in the pool-of-slaves. This meant that any queued pending unittest job could be handled by any slave in the pool, and after running the two systems side-by-side for a while, we were able to turn off the old dedicated unittest machines, reimage them like the other slaves in the pool-of-slaves, and add them to the pool.

Just a week or two ago, in Mar2009, Armen, Axel and Chris Cooper got l10n repacks to run on machines in the pool-of-slaves. This meant that any queued l10n-repack job could be handled by any slave in the pool, regardless of whether it’s an l10n-repack-on-change, an l10n nightly or an l10n release. Again, after running both systems side-by-side for a while, we’re powering off the old l10n systems, reimaging them and adding them to the pool as more general-purpose slaves.

OK, cool pictures, but so what?

Well, this is really exciting because it’s:

  1. More reliable: if one machine dies, we fail over to another machine
  2. More scalable: we load-balance incoming jobs across branches and across all available slaves. No more backlog on one branch while a dedicated machine sits idle on another branch. No more trying to predict how busy a project branch will be, in order to decide the smallest number of dedicated machines to create for a new project branch.
  3. Quicker setup:
    • we can now enable unittests wherever we have build slaves running. One example is unittests being enabled on TryServer, which Lukas announced recently (see Lukas’s blog about the linux, mac, win32 announcements).
    • setting up a new project branch, running builds *and* unittests and even l10n is now much quicker because we’re not setting up new dedicated machines each time. Instead, we’re scheduling extra jobs in the master queue.
    • similarly, scheduling completely new types of jobs, like shark builds or code coverage runs, is getting simpler; again, because we’re not setting up new dedicated machines, we’re just scheduling extra jobs in the master queue.
  4. Faster end-to-end time: Running l10n repacks as individual repacks, all submitted concurrently to the pool of slaves, means that we get *much* faster turnaround times. Each repack is only a few minutes, but with almost 70 locales per OS, that quickly adds up. The FF3.1b3 release was the first time we ran l10n repacks concurrently like this, and we saw the following improvements:
    • linux: reduced from ~1h15m -> 20mins
    • mac: reduced from 1hr -> 20mins
    • win32: reduced from ~6hrs -> 1hr

    Later, once we get past some unittest framework cleanup, we should be able to run unittests without requiring an additional unittest-specific build first (see blog for details). Once that’s fixed, we can then start running individual unittest suites concurrently on the pool-of-slaves, which means:

    • developers see much faster turnaround time on unittests.
    • we can automate running one suite ‘n’ times in a row on the *same* build, to help QA hunt down intermittent unittest failures.

More reliable. More flexible. Easier setup. Faster end-to-end times. What’s not to love?

After all these months of behind-the-scenes work, it’s great to see these changes finally seeing the light of day, and I’m really proud of all the work people did to make this happen.

tc
John.
=====
Disclaimer: I’ve excluded FF2 and Talos systems from this blogpost, just to keep the diagrams manageable. More on those soon.

Fun and Games with Major Updates (more followup)

AndersH asked an important question in the comments to my last post, which deserved a proper response, hence this blog post. Thanks AndersH, and sorry, I should have been clearer.

We do *today* have a major update offer visible to FF1.5.0.12 users, which will major-update them to FF2.0.0.6. A user on FF1.5.0.12 who decided to move to the latest and greatest Firefox would end up going from FF1.5.0.12 -> FF2.0.0.6 -> FF2.0.0.20 -> FF3.0.5 -> FF3.0.7. That’s four updates in a row. Here’s a diagram, which shows what major and minor update offers are currently available, which might help.

The “new process” I was trying to describe above would re-generate a major update offer for the older line every time we did a minor release on the newer line. The diagrams look simple enough…

…but the consequences are important. If we could rewind time, and had used the new proposed process, the diagram for today would instead look like this:

It would also mean that:

  • The FF1.5.0.12 user would now get to the latest and greatest by going FF1.5.0.12 -> FF2.0.0.20 -> FF3.0.7. That’s only two updates in a row, not four (there’s a small worked example of the difference after this list).
  • User would be able to see those major update offers more often, so motivated users would more often be *able* to major update.
  • Any FF2.0.0.20 users who hit “Later” on a major update offer would be reprompted with a new major update offer every time we produce a new FF3.0.x dot-release (approximately every 4-6 weeks).
  • Any FF2.0.0.20 user who hit “Never” on a major update offer would never be reprompted for any new major update offer, even when we produce new major updates.
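To make the difference concrete, here’s a toy snippet that just counts the hops in each case, using the version chains from the diagrams above (hard-coded here; this is purely illustrative and not how the update server computes anything):

```python
# Toy illustration: how many updates a FF1.5.0.12 user applies to reach
# FF3.0.7 under each policy. The chains below are hard-coded from the post;
# this is not how the update service actually works.

# Old: the major-update offer points at the *first* release of the next line,
# so the user also walks the minor updates on each line.
old_path = ["1.5.0.12", "2.0.0.6", "2.0.0.20", "3.0.5", "3.0.7"]

# New: the major-update offer is regenerated at every minor release, so it
# always points at the *latest* release of the next line.
new_path = ["1.5.0.12", "2.0.0.20", "3.0.7"]

print("old policy: %d updates (%s)" % (len(old_path) - 1, " -> ".join(old_path)))
print("new policy: %d updates (%s)" % (len(new_path) - 1, " -> ".join(new_path)))
```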

Hope that clarifies. Thanks again AndersH for the excellent question, and again, sorry for not being more clear the first time.

tc
John.

UPDATE: Thanks to nthomas for correctly pointing out that the FF1.5.0.12 major update goes to FF2.0.0.6, not FF2.0.0.4. Updated text, and diagrams. joduinn 16mar2009.

Netbooks being disruptive in Tokyo

A few days ago, I visited Akihabara here in Tokyo – otherwise known as “Electric Town”. When you come up from the Akihabara (H15) metro stop, the first store right in front of you is the Yodobashi Camera store. Don’t let the name fool you; it’s not just a camera store. It’s huge – I’ve been in many smaller shopping malls in the US and Ireland.

They had everything from a 103″ TV (unstitched – it’s truly one giant LCD screen, not a collection of smaller LCD screens) to computer chips to robots to freestanding washing machines being demo’d *running* on the sidewalk outside the front door. I remember one really long aisle just for computer mice, while graphics tablets were in the next aisle. The rest was a blur.

The bit that really got me was the netbooks. They took up most of the ground floor, and were definitely where most of the lights, shouting, bell-ringing and crowds were. It was quite overwhelming – it reminded me of a rowdy, bazaar-like atmosphere.
The range of netbooks available was dizzying; I tried to count but kept losing track – say 25-50 brands? All the usual brands I knew, but then many more I’d never heard of before. There were plenty that were too small for me to ever use – keyboards so small that I had a hard time cleanly hitting one key at a time with one finger. There were a few machines that had keyboards acceptable for short periods. The HP2133 had an outstandingly great, large keyboard – I was surprised that I was even able to touch-type on it, and came away convinced I could use it as my primary daily keyboard!

The interesting part for me was the two ways you could buy these netbooks. Some people were paying $200-$400 outright for their new netbooks. However, most people were buying them as part of a data-only plan with a cellphone provider. One typical plan I saw with eMobile was:

  • 2yr contract
  • $50-ish for the netbook (eeePC, HP, Lenovo, or…). Some vendors gave you the netbook for free.
  • free wireless-data card, specific to that cellphone provider
  • $50 per month for unlimited data usage anywhere in Japan, no roaming fees
  • 7.2Mbps (megabits per second) wireless connection in every major city in Japan (with gaps in coverage in portions of the countryside). Even if you only get half that, 3.6Mbps, with a bad signal somewhere, it’s still faster than what I get in my house in San Francisco.

It’s not technically a cellphone, but with that kind of bandwidth, Skype and a Bluetooth headset, what’s the difference? And with those kinds of deals, who needs home DSL or home cable anymore?

This is definitely a big new disruptive trend in computers, and I’m happy to see this spinning up as a side-effect of the OLPC project. We live in interesting times!

(Side note: for a machine that is basically a web browser running on a keyboard-screen-wireless-connection, I was disappointed to see how many were running WinXP with the default bundled Internet Explorer. It might have been somewhere in the small print, but I didn’t see any being demo’d running Linux or with Firefox pre-installed!)

UPDATE: fixed typo on units of connection speed – sorry about that – a wireless connection of 7.2 GigaBytes per second would be quite amazing. Also, I didn’t know this was available within the US; I’ll investigate that! joduinn 14mar2009