A friend told me she was unable to log in to Hotmail using Firefox anymore, getting a warning about it not being a supported browser. She also had similar problems with Netflix. We’d recently released new updates on both Firefox 2.x and Firefox 3.0.x, so I stopped by her house to figure out what was going on, just in case…
Sure enough, we could easily reproduce the problems, so I started investigating. “Hmmm… wonder what version she is using?”
Ahhhhhhh!! A few minutes later I had installed FF3.0.5 over her existing FF1.0.7 installation.
- Firefox 1.0.7 works just fine on PPC-based Mac OS X 10.5.5. It had never crashed for her, not once.
- A pave-over install of FF3.0.5 over FF1.0.7 works just fine; history and home page were correctly handled in the pave-over install and worked perfectly. I didn’t check bookmarks, because she never uses them, preferring to use URLbar history instead.
- The number of websites that still work with really old browsers is quite impressive. Obviously, new functionality might not be visible or usable, but having a website gracefully fall back so it still does *something* reasonable on older browsers is tricky to do, and I was impressed by the number of sites that made that effort.
“Oh, thanks for fixing that – but what’s changed in this new version?”. Oh boy – what’s changed between FF1.0.7 and FF3.0.5? Honestly, I didn’t know where to start. Tabs? Memory improvements? JS performance improvements? Awesome bar? Phishing protection? …? After a brief hesitation, I decided to describe only one improvement: how the browser can now check for new updates automatically. I showed her where this was set in preferences, so she should never be out-of-date like this again.
Heading home, thinking about it further, I still think that was a good choice, but it got me wondering what other people would choose. So, I’m curious – if you had to list just one “best new feature” since FF1.0.7, what would you choose?
Aki’s been busy; he’s rounded up 12 more n800/n810 devices, an old guitar pedal board, some velcro and sticky tape. Some firewall hacking later, and we’re now seeing our first automated numbers up on the staging graph server. My personal favorite details are the custom-made stylus holders.
So far the heat from all the power supplies inside the case seems fine, but we’re keeping an eye on it.
This is really exciting progress, but we’ve still got a lot of work to do here.
Normal Talos runs do a lot of work to reduce noise and machine variance in the test results; we do a total of 15 iterations of the same test (5 iterations on each of three separate machines). On these *slow* mobile devices, *one* iteration takes approximately 3 hours; good enough for us to check the infrastructure is in place, but too noisy to be usable by developers – that’s the results now visible on the staging graphs above. Doing 5 iterations, like we do for PCs, would take 15 hours. Ideas we are investigating are:
- having multiple sets of devices running at staggered times (a different set starting every 8 hours, for example). We weren’t able to do this for PC builds because there was too much machine-variance noise in the results, but maybe the ongoing rebootability work might finally solve that.
- having only one set of slaves running, but always testing the latest pending build (skipping over other builds in the process). This avoids the machine-variance noise, but would still top out at one Talos run every 15 hours, and if you miss that window, you’d have to wait for the next 15-hour cycle to come around.
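As a back-of-the-envelope check on the staggering idea, here is a minimal sketch of the turnaround math. The 3-hour and 5-iteration numbers come from the post above; the helper function and its name are purely illustrative.

```python
import math

HOURS_PER_ITERATION = 3   # from the post: one iteration on a mobile device
ITERATIONS_PER_RUN = 5    # same iteration count we use for PC Talos runs

# One full run on a single set of devices, done serially.
RUN_HOURS = HOURS_PER_ITERATION * ITERATIONS_PER_RUN  # 15 hours

def sets_needed(stagger_hours: int) -> int:
    """How many identical device sets are needed so a fresh set can
    start every stagger_hours, given each run takes RUN_HOURS."""
    return math.ceil(RUN_HOURS / stagger_hours)

print(RUN_HOURS)        # 15
print(sets_needed(8))   # 2 sets would deliver a finished run every 8 hours
print(sets_needed(5))   # 3 sets would deliver one every 5 hours
```

So even a small number of extra device sets would noticeably shorten the gap between results, assuming the rebootability work tames the per-machine variance.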
We’d like to be able to do more frequent runs if at all possible…any/all suggestions welcome! 🙂
When: 9th-11th December 1968.
Where: The Fall Joint Computer Conference 1968, here in San Francisco
What: Doug Engelbart demonstrates a wooden box with wheels which moves a “tracking spot” on the screen. Click here for the presentation. He then went on to demo hyperlinking in clip7 and clip8 here. The entire presentation oddly reminded me of the movie “Dr Strangelove”; maybe it was the dress code, B&W camera angles and voice echoes, but still… surreal!
- he showed two different mice – one with the wire out the front, one with the wire out the back.
- quote of the day: “I don’t know why we called it a mouse; sometimes I apologize. It started that way, and we never did change it”.
- seems that integrating todo lists with maps is still a killer app!
- in related news, Logitech built its billionth mouse last week – that is billion with a B!!
Ted’s recent landing of bug#421534 was blogged about by Ben Hearsum here. While it might initially seem like a real snooze, it’s actually really important for our ability to support multiple active project branches.
Each build slave keeps copies of the srcdir/objdir directory tree. Fair enough. However, we keep multiple copies. By the time you keep a directory tree for incremental builds, debug builds, leak builds, nightly builds, and release builds… and do that for each of mozilla-central, tracemonkey and mozilla-1.9.1… suddenly that all adds up.
By far the worst offender here was the Mac, with the “-save-temps” workaround for a compiler problem forcing each tree we kept to be huge (10GB); with Ted’s fix in place, that same tree is 3GB. Ted’s work on DWARF buys us a bunch of breathing room, so we can still create new project branches as needed right now, while we continue to look for a more scalable solution. (Any ideas/suggestions very welcome!)
Given the number of directory trees, and the number of active branches, buying more disks is only a short-term stopgap. With the number of machines we have, it’s also an expensive one. In terms of building scalable infrastructure, it would be great if each slave could support 5-6 different types of builds across 8 or more simultaneous project branches. We might never quite reach a perfectly clean “leave-no-trace” build, but every time we reduce the amount of space a build tree consumes on the slaves, it helps us scale up the number of build types, and project branches, we can provide for developers.
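To make the disk math above concrete, here is a rough sketch of per-slave space for the kept build trees, before and after the fix. The 10GB/3GB tree sizes are from the post; the exact build-type and branch lists are illustrative stand-ins.

```python
# Illustrative lists based on the build types and branches named above.
build_types = ["incremental", "debug", "leak", "nightly", "release"]
branches = ["mozilla-central", "tracemonkey", "mozilla-1.9.1"]

def total_gb(tree_size_gb: float) -> float:
    """Disk consumed on one slave if it keeps a srcdir/objdir tree
    for every (build type, branch) combination."""
    return tree_size_gb * len(build_types) * len(branches)

print(total_gb(10))  # 150 GB per Mac slave before the fix
print(total_gb(3))   # 45 GB after
```

The multiplication is the real problem: every new project branch multiplies the footprint again, which is why shrinking the per-tree size matters more than buying disks.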
“…1941 – a date which will live in infamy – the United States of America was suddenly and deliberately attacked by naval and air forces of the Empire of Japan.”
A lot has changed in the last 67 years – politically, economically, and personally; it was the year my father was born. But listening to President Roosevelt’s live speech the morning after Pearl Harbor still gives me goose-bumps. Every time. It’s only 6 minutes long, so well worth a listen (click here for wikipedia’s ogg/vorbis version). The written speech misses the emotion, the timing, and the reaction of the people present.
The interesting little legal detail here is that, back in 1941, the only way people got to hear these broadcasts were over the commercially owned airwaves, or commercial newsreels in cinemas. This has the unusual side effect of many recordings of historically important national events being owned by private commercial broadcasting companies, not by the public.
Here in the US, the incoming Obama administration is making an important move to use new technologies to get government-owned content directly to the public. This makes lots of sense in a generation where people get their news on The Daily Show and google.com/news, while NPR and traditional printed newspapers try to survive with plummeting audiences. Changing the broadcast medium is just one part. It’s also important that the new content be available to the public, not just owned by a different private commercial company. The work that Mitchell and John Lilly have been doing here with Creative Commons licensing is literally history in the making.
Since Feb 2008, we’ve:
- fixed a Talos redness that hit us 2-3 times *every* day since Talos first started (caused by a synchronization problem between the build and Talos systems)
- made it possible for IT to support these Talos machines as part of their on-call work (by making all the Talos machines boot cleanly into a working state)
- simplified (at least a little) how to set up new Talos machines on a new project branch.
Building on those fixes, we’re now focusing again on reducing the variances between the different Talos machines. We’ve always tried to make the machines as identical as possible to each other (sequential serial numbers, carefully controlling what is installed, etc.), but even so, there was still a lot of variance in the test results. Last week however, after weeks of testing, Alice and Chris AtLee made a breakthrough:
Having Talos machines cleanly reboot after completing every 5th job, and before accepting another job, means that developers never see any burning. There are still details to be worked out before we can roll this out across all Talos machines on all branches… but so far, this looks really encouraging.
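The reboot-after-every-5th-job policy described above can be sketched roughly as follows. This is a minimal illustration, not the real Talos/buildbot machinery; `run_job` and `reboot` here are hypothetical stand-ins for whatever the slave actually invokes.

```python
JOBS_BETWEEN_REBOOTS = 5  # the interval that worked in Alice and Chris's testing

def slave_loop(pending_jobs, run_job, reboot):
    """Run queued jobs in order, cleanly rebooting after every 5th
    completed job and before accepting the next one."""
    completed = 0
    for job in pending_jobs:
        run_job(job)
        completed += 1
        if completed % JOBS_BETWEEN_REBOOTS == 0:
            reboot()  # back to a known-clean state before the next job
```

The key design point is that the reboot happens between jobs, never mid-job, so no individual test result is lost to the restart.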
For background info, have a look at bug#463020 and the live graph on graphserver. If you have any suggestions, or ideas which might help, we’d love to hear them.
Canadian journalist Melissa Fung was recently released after being kidnapped and held hostage in Afghanistan for a month. Just after her release, she did an interview with CBC (her employer) which struck me as an extraordinarily honest, direct, personal and respectful retelling of everything that happened.
I found her calm presence of mind throughout the experience quite impressive, and at the same time, humbling. Some of that probably came from the specialized K&R (kidnap-and-ransom) training before traveling on assignment, and the rest had to come from the strength of her core inner personality.
The interview is almost an hour long and well worth watching.
We had a lot going on during Monday’s downtime. One small, yet important, event was the powering down of the 7 Talos machines running on FF2.0.0.x / mozilla-1.8.0.
These machines came online in December 2007, and were originally needed for performance comparisons with the “new development work” going into Firefox 3.0 at the time. Which was great. But we’ve long since shipped Firefox 3.0, and most developers have moved on to FF3.1 and FF3.next. The world has moved on, so powering off these machines seemed useful because:
- these machines can now be recycled and put to use where we need them more – doing talos or build work on other, more active, code lines.
- this simplifies our life when doing Talos improvements for FF3.0, FF3.1 and mozilla-central. There are interesting differences in how Talos collects info across the various release branches. Turning off this set of Talos machines removes one set of cross-testing we have to do, which improves our “drag coefficient” when making changes to Talos for other branches.
- a minor incremental reduction in load on graphserver (recall that on FF2, we build continuously, meaning we also test, and post results, continuously!)
- no-one was using them anymore. We spend much of our time supporting machines that people actually use, so powering off unused machines, especially before the EOL of FF2.0.0.x, was a psychologically important milestone.
Many thanks to everyone for the honest, and flexible, discussions in bug#463325 and the “Power off unused FF2.0, FF3.0 machines?” thread in dev.planning leading up to this.