Increase growth and revenue by becoming distributed

In my recent blog post about the one-time-setup and recurring costs of an office, I mostly focused on financial costs, human distraction costs, and the cost of increased barriers to hiring. This post talks about another important scenario: when your physical office limits potential company revenue. Pivigo is a company in London, England, which connects companies that need help with data science problems with Ph.D. data science graduates who are leaving academia looking for real-world problems to solve. This 2.5-year-old company was founded by Dr. Kim Nilsson (ex-astronomer and MBA!) and as of today employs 4 people.

For Pivigo to be viable, Kim needed:

  • a pipeline of companies looking for help with their real-world Data Science problems. No shortage there.
  • a pipeline of Ph.D graduates looking for their “first non-academic” project. No shortage there.
  • a carefully curated staff of people who understand both the academic and commercial worlds – essential for keeping things on track and making sure each event is a success for everyone. Kim has been quietly, diligently working on growing a world-class team at Pivigo for years. Tricky, but Pivigo’s hiring has been going great – although they are always interested in meeting outstanding people!
  • a physical place where everyone could meet and work together.

Physical space turned out to be the biggest barrier to Pivigo’s growth and was also the root cause of some organizational problems:

1) Venue: The venue Pivigo had guaranteed access to could only be used once a year, so they could only run one “event” each year. Alternative venues they could find were unworkable because of financial costs, or the commute logistics in London. Given they could only run one course per year, it was in Pivigo’s interest to make these classes as large as possible. However, because of the importance of creating strong network bonds between the participants, the physical size of the venue, and limits on skilled human staffing, the biggest they could do was ~80 people in this once-a-year event. These limits on the once-a-year event put a financial cap on the company’s potential revenue.

2) Staffing: These big once-a-year events were super-disruptive to all the staff at Pivigo. Between the courses, there was administrative work to do – planning materials, interviewing candidates and companies, arranging venue and hotel logistics, etc. However, the “peak load” during the course clearly dwarfed the “low load” in between courses. Hiring for the “peak load times” of the courses meant that there would be a lot of expensive “low load / idle time” between each peak. The situation is very similar to building capacity in fixed-cost physical data centers compared to AWS variable-by-demand costs (a rough sketch follows below). To add to the complexity, finding and hiring people with these very specialised skills took a long time, so it was simply not practical to “hire by the hour/day” a la gig-economy. Smoothing out the peaks-and-troughs of human workload was essential for Pivigo’s growth and sustainability. If they could hold smaller courses more frequently, they would reduce the “peak load” spike, making Pivigo operationally more sustainable and scalable.

3) Revenue: Relying on one big event each year gives a big spike of revenue, which the company then slowly spends down over the year – until the next big event. Each and every event has to be successful in order for the company to survive the following year, which makes each event high-risk for the company. This financial unpredictability limits the company’s long-term planning and hiring. Changing to smaller, more frequent courses makes Pivigo’s revenue stream healthier, safer and more predictable.

4) Pipeline of applicants: Interested candidates and companies had a once-a-year chance to apply. If they missed the deadline, or were turned away because the class was already full, they had to wait an entire year for the next course. Obviously, many did not wait – waiting a year is simply too long. Holding these courses more frequently makes it more likely that candidates – and companies – will wait until the next course. Finding a way to increase the cadence of these courses would improve the pipeline for Pivigo.
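
To make the staffing peak-versus-trough point concrete, here is a rough sketch in Python. The participant volumes, support ratios and baseline headcount are all hypothetical – this is not Pivigo’s actual data – but the shape of the result is the argument: the same annual volume, spread across more and smaller events, needs a far lower peak headcount and carries far less idle capacity between events.

```python
# Purely hypothetical numbers -- not Pivigo's actual staffing data.
# The point: the same annual volume of participants, spread across more
# (and smaller) events, needs a much lower peak headcount, so less idle
# "trough" capacity is carried between events.

import math

PARTICIPANTS_PER_YEAR = 80    # assumed annual volume
PARTICIPANTS_PER_STAFF = 10   # assumed support ratio during an event
BASELINE_STAFF = 2            # assumed admin headcount needed between events

def peak_staff(events_per_year: int) -> int:
    """Headcount needed during each event for a given cadence."""
    cohort_size = PARTICIPANTS_PER_YEAR / events_per_year
    return max(math.ceil(cohort_size / PARTICIPANTS_PER_STAFF), BASELINE_STAFF)

for events in (1, 2, 4):
    peak = peak_staff(events)
    print(f"{events} event(s)/year: cohort of {PARTICIPANTS_PER_YEAR // events}, "
          f"peak staff {peak}, off-peak utilisation {BASELINE_STAFF / peak:.0%}")
```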

If Pivigo could find a way to hold these courses more frequently, instead of just once-a-year, then they could accelerate growth of their company. To do this, they had to fix the bottleneck caused by the physical location.

Three weeks ago, Pivigo completed their first ever fully-distributed “virtual” course. It used no physical venue. And it was a resounding success. Just like the typical “in-person” events, teams formed and bonded, good work was done, and complex problems were solved. Pivigo staff, course participants and project sponsors were all happy. Just like usual.

This map shows everyone’s physical location.
Map of locations

To make this first-ever fully-distributed “virtual” S2DS event successful, we focused on some ideas outlined in my previous presentations here, here and also in my book. Some things I specifically thought were worth highlighting:

1) Keep tools simple. Helping people focus on the job-at-hand required removing unnecessary and complex tools. The simpler the tools, the better. We used Zoom, Slack and email. After all, people were here to work together on a real-world data science problem, not to learn how to use complex tools.

2) Very crisply organized human processes. None of these people were seasoned “remoties”, so this was all new to them. They first met as part of this course. They had to learn how to work together as a team, professionally and as social humans, at the same time as they worked on a project which had to be completed by a fixed deadline.

3) As this was Pivigo’s first time doing this, Kim made a smart decision to explicitly limit the size, so there were only 15 people. This gave Kim, Jason and the rest of the staff extra time and space to carefully check in with each of the remote participants, and gave everyone the best chance of success. Future events will experiment with cohort sizes.

4) Each participant said that they only applied because they could attend “remotely” – even though *none* of them had prior experience working remotely like this. Pivigo were able to interview and recruit participants who would normally not even apply for the London-based event. The most common reason I heard for not being able to travel to London was the disruption to parents with new children – successful applicants worked from their homes on real-world problems, while still being able to take care of their families. The cost of travel to/from England, and the cost of living in London, were also mentioned. The need and demand were clearly there. As was their willingness to try something they’d never done before.

5) I note the diversity impact of this new approach. This cohort had a ratio of 26% female / 74% male, while prior in-person S2DS classes typically had a ratio of 35% female / 65% male. This is only one data point, so we’ll watch this with the next S2DS event, and see if there is a trend.

The Virtual S2DS programme was a success. The project outcomes were of similar quality to the campus-based events, the participants felt they got a great experience that will help their careers going forward and, most importantly, the group bonded more strongly than expected. In a post-event survey, the participants said they would reach out to each other in the future if they had a question or a problem that the network could help with. Interestingly, several of them also expressed an interest in continuing remote working, something they had not considered before.

For Kim and the Pivigo team, this newly-learned ability to hold fully distributed events is game-changing stuff. Physical space is no longer a limiting factor. Now, they can hold more frequent, smaller courses – smoothing down the peaks and troughs of “load”, while also improving the pipelines by making their schedule more timely for applicants. Pivigo are investigating if they could even arrange to run some of these courses concurrently, which would be even more exciting – stay tuned.

Congratulations to Kim and the rest of Pivigo staff. And a big thank you to Adrienne, Aldo, Christine, Prakash, Nina, Lauren, Gordon, Lee, Christien, Rogelio, Sergio, Tiziana, Felipe, Fabio and Mark for quietly helping prove that this approach worked just fine.

John & Kim.
ps: Pivigo are now accepting applications for their next “virtual” event and their next in-person event. If you are an M.Sc./Ph.D. graduate, with a good internet connection, and looking for your first real-world project, apply here. Companies looking for help with data science problems can get in touch with Kim and the rest of the Pivigo team at

“Distributed” ER#3 now available!

Book Cover for Distributed
Earlier this week, just before the US Thanksgiving holidays, we shipped Early Release #3 for my “Distributed” book-in-progress.

Early Release #3 (ER#3) adds two new chapters – Ch.1 remoties trends and Ch.2 the real cost of an office – plus many tweaks/fixes to the previous chapters. There are now a total of 9 chapters available (1, 2, 4, 6, 7, 8, 10, 13, 15), arranged into three sections. These chapters were the inspiration for recent presentations and blog posts here, here and here.

ER#3 comes one month after ER#2. You can buy ER#3 by clicking here, or by clicking on the thumbnail of the book cover. Anyone who already has ER#1 or ER#2 should get prompted with a free update to ER#3. (If you don’t, please let me know!) And yes, you’ll get an update when ER#4 comes out next month.

Please let me know what you think of the book so far. Your feedback gets to help shape/scope the book! Is there anything I should add/edit/change? Anything you found worked for you, as a “remotie” or person in a distributed team, which you wish you knew when you were starting? If you were going to set up a distributed team today, what would you like to know before you started?

Thank you to everyone who’s already sent me feedback/opinions/corrections – all changes that are making the book better. I’m merging changes/fixes as fast as I can – some days are fixup days, some days are new writing days. All great to see coming together. To make sure that any feedback doesn’t get lost or caught in spam filters, it’s best to email a special email address (feedback at oduinn dot com), although feedback via Twitter and LinkedIn works also. Thanks again to everyone for their encouragement, proof-reading help and feedback so far.

Now, it’s time to get back to typing. ER#4 is coming soon!


The real cost of an office


Woodwards building Vancouver demolition 2 by Tannoy | CC BY-SA 3.0 via Wikimedia Commons

The shift from “building your own datacenter” to “using the cloud” revolutionized how companies viewed internal infrastructure, and significantly reduced the barrier to starting your own fast-growth, global-scale company. Suddenly, you could have instant, reliable, global-scale infrastructure.

(Personally, I dislike the term “cloud” but it’s the easiest vendor-neutral term I know for describing essential infrastructure running on rent-by-the-hour Amazon AWS, Google GCE, Microsoft Azure and others…)

Like any new major change, “the cloud” went through an uphill acceptance curve with resistance from established nay-sayers. Meanwhile, smaller companies with no practical alternatives jumped in with both feet and found that “the cloud” worked just fine. And scaled better. And was cheaper to run. And was faster to setup, so the opportunity-cost was significantly reduced.

Today, of course, “using the cloud” for your infrastructure has crossed the chasm. It is the default. Today, if you were starting a new company, and went looking for funding to build your own custom datacenter, you’d need to explain why you were not “using the cloud”. Deciding to have your own physical data center involves one-time-setup costs as well as ongoing recurring operational costs. Similarly, deciding to have a physical office involves one-time-setup costs as well as ongoing recurring operational costs.

Rethinking infrastructure from the fixed costs of servers and datacenters to rented by the hour “in the cloud” is an industry game changer. Similarly, rethinking the other expensive part of a company’s infrastructure — the physical office — is an industry game changer.

Just like physical datacenters, deciding to set up an office is an expensive decision which complicates, not liberates, the ongoing day-to-day life of your company.

The reality of having an office

It is easy to skip past the “Do we really need an office?” question – and plunge into the mechanics, without first thinking through some company-threatening questions.

What city, and which neighborhood in that city, is the best location for your company office? Sometimes the answer is “near where the CEO lives”, or “near the offices of our lead VCs”. However, the answer should also account for questions like “where will we find most of the talent (people) we plan to hire?” and “where will most of our customers be?”.

What size should your office be? This requires thinking through your hiring plans — not just for today, but for the duration of the lease — typically 3–5–10 years. The consequences of this decision may last even longer, given how some people do not like relocating! When starting a company, it is very tricky to accurately predict the answers to these questions multiple years into the future.
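
As a toy illustration of why that prediction is so tricky, here is a small sketch with entirely made-up numbers (starting headcount, growth rates and space-per-person are all assumptions): even modest differences in hiring growth compound into wildly different space needs over a typical five-year lease.

```python
# Made-up numbers, just to illustrate how hard it is to size an office when
# signing a multi-year lease: modest differences in assumed hiring growth
# compound into very different space needs by year five.

SQ_FT_PER_PERSON = 150        # assumed space allowance per person
STARTING_HEADCOUNT = 20
LEASE_YEARS = 5

for annual_growth in (0.10, 0.30, 0.60):
    headcount = round(STARTING_HEADCOUNT * (1 + annual_growth) ** LEASE_YEARS)
    print(f"{annual_growth:.0%} annual hiring growth -> ~{headcount} people, "
          f"~{headcount * SQ_FT_PER_PERSON:,} sq ft needed in year {LEASE_YEARS}")
```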

Business plans change. Technologies change. Market needs and finances change. Product scope changes. Companies pivot. Brick-and-mortar buildings (usually) stay where they are.

If you convince yourself that your company does need a physical office, setting up and running an office is “non-trivial”. You quickly get distracted by the expensive logistics and operational mechanics of a physical building – instead of keeping focus on people and the shipping product.

You need to negotiate, sign and pay leases. Debate offices-with-doors vs open-plan — and if open-plan, do you want library-quiet, or bull-pen with cross-chatter and music? Negotiate seating arrangements — including the who-gets-a-window-view debate. Construct the actual office-space, bathrooms and kitchens. Pick, buy and install desks, chairs, ping-pong tables and fridges. Set up wifi, security doorbadge systems, printers, phones. Hire staff who are focused on running the physical office, not focused on your product. The list goes on and on. All of these take time, money and most importantly focus. This distracts humans away from the entire point of the company — hiring humans to create and ship product to earn money. And the distraction does not end once the office is built — maintaining and running a physical office takes ongoing time, money and focus.

After your office is up-and-running, you discover the impact this new office has on hiring. You pay to relocate people who would be great additions to your company, but do not live near your new office. You are disappointed by good people turning down job offers because of the location. You have debates about “hiring the best person for the job” vs “hiring the best person for the job who is willing to relocate”. You have to limit hiring because you don’t have a spare desk available. You need to sublease part of your new office space because growth plans changed when revenue didn’t grow as hoped – and now you have unused, idle office space costing you money every month.

The benefits of no office

You dedicate more time, money and focus on the people, and the shipping product — simply by avoiding the financial costs, lead-time-delays and focus-distractions of setting up a physical office.

Phrased another way: Distributed teams let you focus the company time and money where it is most important — on the people and the product. After all, it doesn’t matter how fancy your office is unless you have a product that people want to use.

Having no office lets you sidestep a few potentially serious and distracting ongoing problems:

You don’t need to worry about signing a lease for a space that is too small (or too large) for the planned growth of the company. You avoid adding a large recurring cost (a lease) to the company books, which impacts your company’s financial burn rate.

You don’t need to worry if the location of the office helps or hinders future hiring plans. You don’t need to worry about good people turning down your job offers simply because of the office location. You can hire from a significantly larger pool of candidates, so you can hire better and faster than all-in-one-location competitors. For more on this, see .

Even larger companies like Aetna, with established offices, have been encouraging work-from-home since 2005 – because they can hire more people, and also because of the money saved on real estate. Last I heard, Aetna was saving $78 million a year by having people work from home. Every year. No wonder Dell and others are now doing the same.

You sidestep human distractions about office layout.

You don’t need to worry about business continuity if the office is closed for a while.

Sidestepping all these distractions helps you (and everyone else in the company) focus attention and money on the people and the product you are building and shipping. This is a competitive advantage over all-in-one-office companies. Important stuff to keep in mind when you ask yourself “Do we really need an office?”

(Versions of this post are on and also in the latest early release of my “Distributed” book.)

(Photo credit: Woodwards building Vancouver demolition 2 by Tannoy | CC BY-SA 3.0 via Wikimedia Commons)

“Distributed” Early Release #2 now available!

Book Cover for Distributed
Last week, we rolled out Early Release #2 for my “Distributed” book-in-progress.

Early Release #2 (ER#2) adds two new chapters (Ch.12 one-on-ones and reviews; Ch.14 group socials and work-weeks). There are also a bunch of tweaks and fixes to the previous Chapters 1, 5, 6, 7 and 9, including grouping related chapters into three sections.

This is one month after ER#1. You can buy ER#2 by clicking here, or by clicking on the thumbnail of the book cover. Anyone who already bought ER#1 should get prompted to update to ER#2. (If you don’t, please let me know!) And yes, you’ll get an update when ER#3 comes out. For added goodness with an Early Release, your feedback gets to help shape/scope the book!

Please let me know what you think of the book so far. Is there anything I should add/edit/change? Anything you found worked for you, as a “remotie” or person in a distributed team, which you wish you knew when you were starting? If you were going to set up a distributed team today, what would you like to know before you started?

Thank you to everyone who’s already sent me feedback/opinions/corrections – all changes that I hope make the book better. To make sure that any feedback doesn’t get lost or caught in spam filters, it’s best to email a special email address (feedback at oduinn dot com), although feedback via Twitter and LinkedIn works also. Thanks again to everyone for their encouragement, proof-reading help and feedback so far.

Now, it’s time to brew more coffee and get back to typing. ER#3 is only a few weeks away!


Looking for a job in a “remote friendly” company?


Last week, I was looped into a discussion around “what companies are remote-friendly”, “how to find a job when you are a remotie” and “what companies have a good remote culture?”. These questions are trickier than one might expect – after all, everyone will have their own definition of “good”.

My metric for deciding if a company has a good remote culture is a variation of the Net Promoter Score:

“if you work remotely for a company, would you recommend the company to a friend?”
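
For anyone unfamiliar with the metric being riffed on here, below is a minimal sketch of how a standard Net Promoter Score is computed. The survey responses are invented, and the question wording is adapted to the remote-work variation above.

```python
# The survey answers below are invented; this just shows how a standard
# Net Promoter Score is computed, since the question above is a variation
# of that metric.

def net_promoter_score(ratings: list[int]) -> float:
    """NPS = % promoters (9-10) minus % detractors (0-6), on 0-10 ratings."""
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100 * (promoters - detractors) / len(ratings)

# "Would you recommend working remotely at this company to a friend?" (0-10)
sample_responses = [10, 9, 8, 7, 9, 3, 10, 6, 9, 8]
print(f"NPS: {net_promoter_score(sample_responses):+.0f}")
```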

If you don’t find a remote job by word-of-mouth referral from a friend, how can you find a job in a remote-friendly company? I have been accumulating a few useful sites for remote job seekers, avoiding all the “make $$$ working from home” spam out there. After last week’s discussion, I realized I haven’t seen this list posted in any one place, so I thought it worthwhile to post it here. Without further ado, in no particular order, here goes:

Some sites seemed more focused on freelancer / short-term positions, which were of less interest to me at this time, so I’ve skipped most of those and only listed a couple here for completeness:

Some of these sites are better than others in terms of how you search for “remote” jobs – for example, I think this one needs a little UX love. But something is better than nothing. As I said in my presentation at Cultivate NYC, for job listings posted on your own company website, simply having “remote welcome” in the job description is a great quick-and-easy start.

Hopefully, people find this list useful. Of course, if there are any forums / job boards listing remote jobs that you see missing from here, please let me know.

(Thanks to @laurelatoreilly, @jessicard, @qethanm and @hwine for last week’s discussion.)

(Updated to add joduinn 05jan2016)

The “Distributed” book-in-progress: Early Release#1 now available!

My previous post described how O’Reilly does rapid releases, instead of waterfall-model releases, for book publishing. Since then, I’ve been working with the folks at O’Reilly to get the first milestone of my book ready.

As this is the first public deliverable of my first book, I had to learn a bunch of mechanics, asking questions and working through many, many details. Very time consuming, and all new-to-me, hence my recent silence. The level of detailed coordination is quite something – especially when you consider how many *other* books O’Reilly has in progress at the same time.

Book Cover for Distributed
One evening, while in the car to a social event with friends, I looked up the “not-yet-live” page to show them – only to discover it was live. Eeeeek! People could now buy the 1st milestone drop of my book. Exciting, and scary, all at the same time. Hopefully, people like it, but what if they don’t? What if I missed an important typo in all the various proof-reading sessions? I barely slept at all that night.

In O’Reilly language, this drop is called “Early Release #1 (ER#1)”. Now that ER#1 is out, and I have learned a bunch about the release mechanics involved, the next milestone drop should be more routine. Which is good, because we’re doing these every month. Oh, and like software: anyone who buys ER#1 will be prompted to update when ER#2 is available later in Oct, and prompted again when ER#3 is available in Nov, and so on.

You can buy the book-in-progress by clicking here, or by clicking on the thumbnail of the book cover. And please, do let me know what you think – is there anything I should add/edit/change? Anything you found worked for you, as a “remotie” or person in a distributed team, which you wish you knew when you were starting? If you were going to set up a distributed team today, what would you like to know before you started?

To make sure that any feedback doesn’t get lost or caught in spam filters, I’ve set up a special email address (feedback at oduinn dot com), although I’ve already been surprised by feedback via Twitter and LinkedIn. Thanks again to everyone for their encouragement, proof-reading help and feedback so far.

Now, it’s time to brew more coffee and get back to typing.


A Release Engineer’s view on rapid releases of books

As a release engineer, I’ve designed and built infrastructure for companies that used the waterfall-model and for companies that used a rapid release model. I’m now writing a book, so recently I have been looking at a very different industry through this same RelEng lens.

In days of old, here’s a simplistic description of what typically happened. Book authors would write their great masterpiece on their computer, in MSWord/Scrivener/emacs/typewriter/etc. Months (or years!) later, when the entire book was written, the author would deliver the “complete” draft masterpiece to the book company. Editors at the book company would then start reading from page 1 and wade through the entire draft, making changes, fixing typos, looking for plot holes, etc. Sometimes they could make these corrections themselves; sometimes the changes required sending the manuscript back to the author to rewrite portions. Later, external reviewers would also provide feedback, causing even more changes. If all this took a long time, sometimes the author had to update the book with new developments to make sure the book would be up-to-date when it finally shipped. Tracking all these changes was detailed, time-pressured and chaotic work.

Eventually, a book would finally go to press. You could reduce risk, as well as simplify some printing and shipping logistics, by having bookshops commit to pre-buy bulk quantities – but this required hiring a staff of sales reps to sell the books to bookstores in advance, before they were fully written. Even so, you could still over/under-estimate the future demand. And let’s not forget that once people start reading the book, and deciding if they like it, you’ll get word-of-mouth, book reviews and bestseller lists changing the demand unpredictably.

If people don’t like the book, you might have lots of copies of an unsellable book on your hands, which represents wasted paper and sunk costs. The book company is out-of-pocket for all the salaries and expenses from the outset, as well as the unwanted printed books and would hope to recover those expenses later from other more successful books. Similar to the Venture Capital business model, the profits from the successful books help recoup the losses from the other unprofitable books.

If people do like the book, and you have a runaway success, you can recoup the losses of the other books so long as you avoid being out-of-stock, which could cause people to lose interest in the book because of back order delays. Also, any errors missed in copy-editing and proofreading could potentially lead to legal exposure and/or forced destruction of all copies of printed books, causing further back order delays and unexpected costs. Two great examples are The Sinner’s Bible and the Penguin cook book.

This feels like the classic software development “waterfall” model.

By contrast, one advantage of the rapid release model is that you get quick feedback on the portions you have shipped so far, allowing you to revisit and quickly adjust/fix as needed… even while you are still working on completing and shipping the remaining portions. By the time you ship the last portion of the project, you’ve already adjusted and fixed a bunch of surprise gotchas found in the earlier chunks, so your overall project is much healthier and more usable. After all, for the already-shipped portions, any problems discovered in real-world use have already been fixed up. By comparison, a project where you write code for 18-24 months and then publish it all at the same time will have you dealing with all those adjustments/fixes for all the different chunks *at the same time*. Delivering smaller chunks also helps keep you honest about whether you are still on schedule, or lets you quickly see whether you are starting to slip.

The catch is that rapid releases require sophisticated automation to make the act of shipping each chunk as quick and reliable as possible. It also requires dividing a project into chunks that can be shipped separately – which is not as easy to do as you might hope. Doing this well requires planning out the project carefully in small shippable chunks, so you can ship each chunk when it is written, tested and ready for public consumption. APIs and contractual interfaces need to be understood and formalized. Dependencies between the chunks need to be figured out. Work estimates for writing and testing each module need to be guessed. A calendar schedule is put together. The list goes on and on… Aside: you probably should do this same level of planning for waterfall-model releases too, but it’s easy to miss hidden dependencies or the impact of schedule slips until the release date, when everything was supposed to come together on the day.
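
As a toy illustration of that planning exercise – the chapter names, dependencies and week estimates below are all invented – here is one way to turn a dependency list and guessed work estimates into a rough serial delivery schedule:

```python
# Invented chapter names, dependencies and estimates -- a toy version of the
# planning exercise above: order the shippable chunks by their dependencies,
# then turn (guessed) work estimates into a rough serial delivery schedule.

from graphlib import TopologicalSorter  # Python 3.9+ standard library

estimate_weeks = {"ch1": 3, "ch2": 2, "ch4": 4, "ch6": 2}
depends_on = {                      # chunk -> chunks that must ship first
    "ch2": {"ch1"},
    "ch4": {"ch1"},
    "ch6": {"ch2", "ch4"},
}

elapsed = 0
for chunk in TopologicalSorter(depends_on).static_order():
    elapsed += estimate_weeks[chunk]
    print(f"{chunk}: ready to ship around week {elapsed}")
```

The schedule simply accumulates the estimates in dependency order, since a single author mostly writes serially; the same structure also makes it obvious when a slipped chunk pushes out everything that depends on it.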

So far, nothing new here to any release engineer reading this.

One of the (many) reasons I went with O’Reilly as the publisher for this book was their decision to invest in their infrastructure and process. O’Reilly borrowed a page from release engineers and invested in automation to switch their business from a waterfall model to a rapid release model. This change helps keep schedules realistic, as schedule slips can be spotted early. It helps get early feedback, which helps ship a better final product, so the ratio of successful books vs unprofitable books should improve. It helps judge demand, which helps final printing production planning – reducing wasted costs on less successful books, and improving profits and timeliness for successful books. This capability is a real competitive advantage in a very competitive business market. This is an industry game changer.

When you know you want “rapid release for books”, you discover a lot of the tools were already there, with different names and slightly different use-cases. Recent technology advances help make the rest possible. The overall project (“table of contents”). The breaking up of the overall project into chunks (“book chapters”). Delivering usable portions as they are ready (print-on-demand, electronic books + readers, online updates). Users (“readers”) who get the electronic versions will get update notices when newer versions become available. At O’Reilly, the process now looks like this:

Book authors and editors agree on a table of contents and an approximate ship date (write a project plan with scope) before signing contracts.

Book authors “write their text” (commit their changes) into a shared hosted repository, as they are writing throughout the writing creative process, not just at the end. This means that O’Reilly editors can see the state of the project as changes happen, not just when the author has finished the entire draft manuscript. It also means that O’Reilly reduces risk of catastrophic failure if an author’s laptop crashes, or is stolen without a backup.

A hosted automation system reads from the repository, validates the contents and then converts that source material into every electronic version of the book that will be shipped, including what is sent to the print-on-demand systems for generating the ink-on-paper versions. This is similar to how one source code revision is used to generate binaries for different deliverables – OSX, Windows, linux, iOS, Android,…
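
As a minimal sketch of the idea (not O’Reilly’s actual tooling – the file names are placeholders, and it assumes the open-source pandoc converter is installed), a single automated step that builds several deliverable formats from one source could look like this:

```python
# Not O'Reilly's actual tooling -- just a minimal sketch of the idea: one
# source file built into several deliverable formats in a single automated
# step. Assumes the open-source `pandoc` converter is installed; the file
# names are placeholders.

import subprocess

SOURCE = "book.md"                         # the single source of truth
TARGETS = ["book.epub", "book.html"]       # one input, many output formats

for target in TARGETS:
    # Each deliverable is generated from the same source revision, much like
    # building one code base into binaries for several platforms.
    subprocess.run(["pandoc", SOURCE, "-o", target], check=True)
    print(f"built {target}")
```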

O’Reilly has all the automation and infrastructure you would expect from a professional-grade Release Engineering team. Access controls, hosted repos, hosted automation, status dashboards, tools to help you debug error messages, teams to help answer any support questions as well as maintain and improve the tools. Even a button that says “Build!”!!
The only difference is that the product shipped from the automation is a binary that you view in an e-reader (or send to a print-on-demand printing press), instead of a binary that you invoke to run as an application on your phone/desktop/server. With this mindset, and all this automation, it is no surprise that O’Reilly also does rapid releases of books, for the same reasons software companies do rapid releases. Very cool to see.

I’ve been thinking about this a lot recently because I’m now putting together my first “early release” of my book-in-progress. More info on the early release in the coming days when it is available. As a release engineer and first time author, I’ve found the entire process both totally natural and self-evident and at the same time a little odd and scary. I’d be very curious to hear what people think… both of the content of the actual book-in-progress, and also of this “rapid release of books” approach.

Meanwhile, it’s time to refill my coffee mug and get back to typing.


The USENIX Release Engineering Summit, Nov 2015

The USENIX Release Engineering Summit 2015 (“URES15”) is quickly approaching – this time it will be held in November in Washington DC along with LISA2015. To register to attend URES15, click here or on the logo.

Given how great the two (!) URES conferences were last year, I expect URES15 to be fun, informative and very down-to-earth practical all at the same time. One of the great things about these RelEng conferences is that you get to hear what did, and did not, work for others working the same front lines – all in a factual, constructive way. Sharing important ideas helps raise the bar for everyone, so every one of these feels priceless. It is also a great way to meet many other seasoned people in this niche part of the computer business.

If you are a Release Engineer, Site Reliability Engineer, Production Operations, deal with DevOps culture or are someone who keeps wanting to find a better way to reliably ship better quality code, you should attend! You’ll be glad you did!

Note: If you have a project that you worked on which others would find informative, you can submit a proposal here. The deadline for proposals is Friday 04sep2015.

See you there!

“we are all remote” at Cultivate NYC

It’s official!!

I’ll be speaking about remoties at the O’Reilly Cultivate conference in NYC!

Cultivate logo
Cultivate is being held on 28-29 Sept 2015, in the Javits conference center, in New York City. This is intentionally the same week, and same location, as the O’Reilly Strata+Hadoop World conference, so if you lead others in your organization, and are coming to Strata anyway, you should come a couple of days early to focus on cultivate-ing (!) your leadership skills. For more background on O’Reilly’s series of Cultivate conferences, check out this great post by Mike Loukides. I attended the Cultivate Portland conference last month, when it was co-located with OSCON, and found it insightful, edge-of-my-seat stuff. I expect Cultivate NYC to be just as exciting.

Meanwhile, of course, I’m still writing like crazy on my book (and writing code when no-one is looking!), so have to run. As always, if you work remotely, or are part of a distributed team, I’d love to hear what does/doesn’t work for you and any wishes you have for topics to include in the book – just let me know.

Hope to see you in NYC next month.


“An Illustrated Book of Bad Arguments” by Ali Almossawi

Most engineers I know are good at writing software. Some engineers I know are good at writing software for extraordinarily complex projects. Some have a brain that naturally, instinctively, just works this way, while some took all sorts of courses on algorithms and structured programming in university. Regardless, when you are coding by yourself, these important coding and problem-solving skills are probably good enough. However, when you transition from coding-by-yourself to coding-in-a-team, you need an additional skill.

Now you need the skills to explain to other engineers why your code, your approach, is best. Maybe even why your approach is better than an alternative approach being proposed by someone else. It is a given that you can write good code – that is how you got on the team in the first place. The question is: can you explain why your code is better than any of the other alternatives available?

This skill might sound deceptively simple, but in reality it has a few super complex twists to it. Someone once explained it to me as follows: smart people, given the same starting point, the same goal, and the same restrictions/limitations, would come up with similar-enough solutions. Maybe not identical, because all humans are slightly different, but at least similar in approach – and they would solve the same problem. Given that, if I were to find myself in a situation where very different solutions are being discussed, maybe we have different understandings of what the starting point is? Or what the goal is? Or what restrictions/assumptions/limitations to work within? Double-checking all of those can quickly uncover missing (or invalid) problem statements, restrictions/assumptions/limitations or goals. If it turns out that we’re trying to solve different problems, then of course the proposed solutions would be very different!

If your skills in logic/reasoning/conflict are weak, and if you skip the verification step, then it is easy to get frustrated by your inability to “convince others” of the greatness of your code or design. Maybe you cannot explain it correctly. Maybe the others don’t want to listen to you – because they feel equally strongly about their own proposals. Maybe both. If you are passionate about your code, but cannot get others to understand its greatness, the situation can easily turn to frustration (“I give up – why does no-one understand how great my code is?”) or even anger (“This code is too important to give up on – what do they know anyway – they’re just idiots”). This shift from working together to find the best code/design (something which can be objectively measured or stress-tested, once everyone agrees on what they think is “best”) to destructively debating the other humans (using hard-to-measure qualities) is dangerous. Even if the “right” code is actually chosen in the end, the manner in which the discussion was carried out can be toxic to the group.

Like many other engineers, I never had any formal training in logic/reasoning/conflict at any of my universities, so I had to learn these skills on-the-job, or at workshops I found over the years. Sometimes it went well. Sometimes it did not. I’m still working on this. Being able to talk through differences of opinions, in a constructive way, is a crucial skill set to develop as you work with others during your engineering career. And essential for *everyone* in a team to have, in order for the *team* to work together effectively.

I stumbled across this book in 2014, loved it, and was then delighted and surprised to discover that it was written by someone I knew from when I worked at Mozilla!! Ali’s book is focused on describing the different types of arguments used in logic debates, with very easy-to-understand examples. This will help you keep your logic and reasoning honest when you are in a discussion. It will also make it easy to notice when someone else switches to using “Ad Hominem” or “No True Scotsman” or “Genetic Fallacy” or “Straw Man” or any other of these types of arguments on you.

I think many, many people, including myself, find the idea of a book about arguments to be daunting, almost off-putting, so I really like how the simple one-page descriptions and the full-page cartoon images make the whole topic more approachable, appealing and, in turn, very easy to understand in a non-threatening way. At 56 pages, it is a super short book, and almost 50% of those pages are cartoons, so you could expect to be done in a few minutes. However, each page found me sitting rethinking different conflicts over the years, so this took a lot longer to read than I expected. And I re-read it often. In addition to the printed version of the book, there is also a fun

Try it, I think you will like it.

ps: This is one of two great books that I really like in this area – both of which I think should be required reading for engineers. Both of which I just discovered I had not yet blogged about, hence this post. Watch for another post coming soon.
