Right now it’s mostly a bunch of original[1] cocktail recipes, but I’m looking forward to growing this garden over time and seeing what it becomes.
I’ve always liked the idea of a “home on the internet” - a place to put things I’d like to share, and a place to point people at to see what I’m interested in. When I was 12 or so, I had a “homepage” - a highly cringeworthy website with colourful text, a repeating pattern background, an HTML table of my favourite sound effects, and a recorded voice greeting that autoplayed on page load. That is thankfully no longer online, but there’s still something appealing about having my own little corner of the Web - a small mark on the world.
For a while it looked like social networks could be that home - you’d have your profile page and a space to share things, and all your friends would be there too! It didn’t really work out. From the start, your friend network was always fragmented between different sites that had no real interest in connecting the fragments. Then “social networks” turned into “social media”, sharing became about Building your Personal Brand instead of keeping in touch with friends, and everything got swamped by ads and memes. And renting space from your choice of billionaire edgelord didn’t feel much like home anyway.
At this moment at least, it feels like the answer should look more like a good old-fashioned personal website - maybe even with a little cringe, although I promise to hold off on the sound effects.
This blog is part of that. But blogging has never come naturally to me. I feel pressure to Compose a Post that is a Finished Thought at a specific point in time. And the end result - a chronologically ordered mix of essays and journal entries - still feels like it’s missing something as a “home”.
I’m hoping a digital garden will fill the gap. It’s a place to browse, to share things that aren’t finished.
The idea of a digital garden is related to the wave of note-taking tools and workflows that got really trendy over the last few years. They go by names like “linked notes” and “personal knowledge management”, “second brains” and “tools for thought”. Proponents talk about “seeing emergent connections between ideas” and even “thinking better”. I find those ideas fascinating, but some of the more ambitious claims have never really come through for me.
I think it’s because a lot of the note-taking zeitgeist seems to take for granted that the main reason you’re taking notes is to produce some sort of writing for an audience. For some people, a digital garden is where they plant ideas, and cultivate them until they become essays. I do write occasionally, but I’m not a researcher or journalist where producing prose is my main form of output. And writing for an audience brings a lot of baggage - “is anyone going to care about what I have to say?” - and ambiguity - “what background knowledge can I assume? What register should I write in?”
A common response is “write to think”, or “write with yourself as the audience”. I see the value in that - it’s how I trick myself into writing the blog posts I do eventually publish - but I don’t naturally think in prose or well-constructed arguments. When I do write to clarify or challenge my thinking, it tends to be a forest of branching bullet-points, not anything resembling an article. And anyway, most of my writing isn’t to think, but to remember.
Most of the time I’m writing something down to remember it, to refer to it later, or just so it can occupy less space in my brain. For me, that ends up looking like a fairly random collection of notes, without much organisation or relationship between them.
Some of those notes are of no interest to anyone else, like my weekly to-do list or when I last deep-cleaned my cats’ litter box, or deeply private, like takeaways from a therapy session, or just jumbled, half-formed thoughts. But some are halfway between “private braindump” and “finished blog post” - my comparative gin tasting notes, or surprising facts about bananas, or a cocktail I came up with but don’t have an Instagram-worthy photo of. Some are unpolished results of tinkering projects or experiments with generative art that I’m nevertheless proud of.
So I’d like to try putting some of those half-finished thoughts on display, and maybe growing some of them into more over time. I don’t really expect anyone to read them, but at least I can have a little corner of the Web that’s mine, and feels like how I think.
This is pretty similar to the ethos of Learning in Public, but that still seems a bit grandiose to me - I’m not trying to become an astrophysicist here. My notes are primarily about remembering, so I’m calling this “remembering in public”.
[1] Cocktails have been around for a couple of hundred years (estimates vary) and most cocktail recipes are simple variations of other recipes, with an ingredient substituted or proportions tweaked. This means parallel evolution is extremely likely, and any recipe will have been “invented” multiple times by different people. (In fact, many classic cocktails like the Manhattan and Martini have many competing and equally apocryphal origin stories!) So my cocktails are “original” in the sense that I didn’t follow a recipe when first making them, but I don’t claim that they are novel.
Since I started this blog I moved country, got married, became a US citizen, adopted two cats, bought a home, sold a startup, worked at several companies and did three stints as an independent contractor… and somehow still haven’t managed to write about any of those things on this blog!
I started this blog in 2009. I wanted a space to dump occasional words and photos, without thinking at all about the mechanism, so I just set up a Tumblr and called it a day.
I don’t remember what motivated me to migrate off Tumblr five years later. I vaguely recall having to use a clunky WYSIWYG editor to post, but jumping through hoops to add custom CSS or JS might have been the trigger. I still didn’t want to build everything from scratch, so I found Octopress, which was somewhere in between a Jekyll fork and a bundle of styles and plugins. That gave me a nice balance: a polished theme, reasonable defaults, social network integrations, while still giving me direct access to the HTML when I needed it.
Ditching Tumblr meant I also needed somewhere to host the blog. I still didn’t want to manage infra, so GitHub Pages seemed like a great fit. Unfortunately, in 2014, GitHub Pages wasn’t the flexible platform it is today. It was a great way to host static HTML, but its support for static site generators was limited to running a specific version of Jekyll, with pre-baked config. My Octopress setup wasn’t going to work.
A common workaround was to run Jekyll locally and commit the generated static site to a separate branch of the same Git repo, then use GitHub Pages to serve that branch. The manual build step reminded me uncomfortably of FTPing files to a VPS, so instead, I set up a CI build on Codeship to automatically build, commit and push the static site each time I pushed a new commit to the blog.
At this point I had - of course - spent more effort setting up a blogging workflow than I had actually writing blog posts. So it goes.
On the plus side, around this time, a couple of my posts - notably What programming is like - got read and appreciated by a lot of people, which made it all feel worth it!
In 2014, to start an Octopress blog you forked the original Octopress repo - a heavily-customised Jekyll setup - and added your own commits on top. That meant you were pinned to certain versions of certain gems. At some point I found that, due to incompatible changes in those gems and in Ruby itself, I couldn’t build the blog any more.
A static site generator that I couldn’t run meant I couldn’t post any more, or even change anything. And Octopress was pretty long in the tooth by now. The author was talking about a “full rewrite”. I didn’t feel like being tied to a monolithic framework when I’d chosen Jekyll in the first place for simplicity and control.
I decided to migrate off Octopress onto mainstream Jekyll, and make sure I was running on the latest version, so this couldn’t happen again. (Ah, the innocence of youth.)
I had a post I wanted to publish, and didn’t want to rebuild my entire blog layout and styles from scratch first. So I imported the layout, styles, and a bunch of plugins from Octopress. I now had a plain Jekyll blog, which looked like an Octopress blog.
Among other highly questionable decisions, importing the Octopress styles ended up meaning downloading the minified stylesheet from the live blog and committing it to the repo. This was the whole 40kB stylesheet with whitespace and newlines removed - my commit comment at the time called this “completely unmaintainable”. Naturally I ended up wanting to add some custom styles anyway, so I tacked them onto the end of the unmaintainable 40kB blob.
The (admittedly somewhat baroque) multi-branch CI deploy + GitHub Pages hosting basically still worked, so I didn’t change that, besides fixing some minor bit-rot.
“History doesn’t repeat itself, but it does rhyme.” Again I found my blog wouldn’t build. This time it was an accumulation of unaddressed issues - not too surprising as all the libraries and infrastructure I’d depended on had been moving on for eight years.
My CI solution had stopped working. GitHub Pages had got a lot more capable in the meantime, now supporting arbitrary Docker containers via GitHub Actions, so I switched to that, but encountered errors from Jekyll. I tried upgrading to the latest version of Jekyll, only to find it had made several breaking changes which broke my build in other ways.
I flailed for a while trying to debug this, but gave up sometime around discovering that Jekyll was sometimes trying to apply my baseurl prefix twice.
My Octopress blog had supported lots of nice things - category pages, a fancy stylesheet with collapsible sidebars, embedded widgets showing my recent tweets and Pinboard bookmarks. All of that was blinding me to the simplicity of my actual task: take 18 Markdown files and publish them online.
I started a new Jekyll[1] project from scratch, imported the Markdown files from the old repo, and had everything working in a couple of hours.
For hosting, I had heard good things about Netlify, so I wanted to try them out. They dealt seamlessly with my (now extremely unremarkable) Jekyll build. Setting up a custom domain took a couple of tries but is now working fine.
My old blog lived at blog.samstokes.co.uk/blog. I don’t remember why I thought the /blog suffix was a good idea, particularly in combination with the blog subdomain. It was probably an SEO best practice I read about somewhere.
While I was nuking everything from orbit and starting again, I thought it would be a good opportunity to take advantage of one of the silly domains I had registered a while back. So my new blog is Five Eights, at five-eights.com.
Why Five Eights? Well, it’s almost as good as five nines.
I set up a bunch of redirects - from the old domain to the new, and from old URL structures to new - so hopefully, all my URLs from the last 13 years should still be valid. I doubt anyone cares what I thought 13 years ago about the UK’s Digital Economy Act, but I’m doing my infinitesimal part to keep the Web from eroding.
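For the curious: since Netlify hosts the new blog, redirects like these are easy to express in its `_redirects` file. The rules below are an illustrative sketch of the idea, not my exact configuration:

```
# Illustrative Netlify _redirects sketch (not my exact rules).
# Send traffic from the old domain to the new one, keeping the path:
https://blog.samstokes.co.uk/* https://five-eights.com/:splat 301!

# Fold the old /blog prefix into the new URL structure:
/blog/* /:splat 301
```

The `:splat` placeholder carries the rest of the old path across, so each old post URL lands on its new equivalent with a permanent (301) redirect.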
As a remarkable coda to the above catalog of breakages over the last 13 years, the Disqus comments I’ve been using since 2009 are still working fine. That’s genuinely impressive, given that I’m fairly sure most of the platforms they integrated with have collapsed in that time. On the other hand, Disqus was acquired by a holding company in 2017, and their privacy practices seem pretty shady.
In any case, the new blog doesn’t support comments. In 2009 blog comments seemed like a great way to start a conversation, but now I’d rather move the discussion to other, more real-time platforms, and keep this my own little corner of the Web. And this way I don’t have to include Disqus’s third-party Javascript (and, no doubt, tracking cookies) on my site.
Some of my older posts had some good discussions, so I’m working on importing those comment threads from Disqus.
Update 2023-02-20: I got the Disqus import working, so comments on previous posts are showing up now. Definitely happy to have preserved a record of that conversation.
Since a couple of my posts got a fair amount of readership, analytics has become important to me. For now I’m sticking with Google Analytics because I’m familiar with it, but I’m thinking of looking into more privacy-focused solutions like Plausible.
Notably absent from this post is any sort of New Year’s Resolution to Blog More This Year, because those always look rather silly in retrospect, particularly when followed by another multi-year hiatus. I’d like to write more, but I think it’s fair to say at this point that regular blogging isn’t something I’m wired for.
I do like the idea of thinking in public, though. There are plenty of things I write down - musings on concepts, half-finished theories, cocktail recipes - that I’d love to discuss online.
This seems closer to notions of a digital garden - where the focus is less on a chronologically ordered list of timestamped, evergreen, finished essays, and more on an evolving set of “living documents”. Part of my difficulty with blogging is that psychological barrier of needing to have a finished thought at a particular time. I’m interested to see if I can evolve this site into a digital garden. Hopefully there won’t be any more ground-up rewrites needed first!
[1] I briefly considered switching blogging engines, to something trendier like Hugo or Ghost, or even something written in Rust, my current favourite hobby language. But all of this bit-rot has reminded me of the benefits of remaining close to the mainstream. Jekyll works, and it’s widely used. At least when it breaks again, I’ll be in good company.
The obvious answer: making the startup succeed. Making something people want, growing revenue, landing that big customer or the next funding round. But everyone in a startup cares about those things. What should engineering, specifically, care about, in order to make the startup succeed?
There’s loads of advice online about how to be effective founders, how to refine your business plan, how to validate your product thesis. But I haven’t found much written about effective engineering for early-stage startups. This is a shame, because for most startups, engineering is pretty important. It’s how you build your product - not to mention probably the majority of your headcount - so the way you go about it will affect the kind of product you build, and even the way your company operates.
When you do find advice about startup engineering, it tends to be limited to cursory admonishments to “move fast” and “get shit done”. These slogans aren’t wrong, but they are overly dismissive and reductive. They suggest that the way you do things doesn’t matter at all: anything which doesn’t look like immediate, visible progress must be a waste of time. This all but rules out introspection and improvement. It offers no help with answering questions like:
I’d like to suggest a more constructive theory of what an early-stage startup engineering team should care about, to help answer those kinds of questions.
You might think asking what an engineering team cares about is fluffy and impractical. But what your team cares about has a big impact on the way you operate: where you focus your effort, which activities you encourage and which ones you try to minimise.
And notice that’s “will”, not “should”: if you talk about something at lunch, stick posters on the wall about it and chart it on plasma screens, people will inevitably start to prioritise it. If they come up with an idea or notice some problem that’s relevant to the charts on those screens, they’re more likely - even subconsciously - to work on it rather than dismiss it.
So what should a startup engineering team care about?
Clearly this will depend somewhat on the business you are in. If you handle payments, your top priorities are probably security and data integrity, narrowly followed by uptime. If you’re making a game, maybe users will tolerate the occasional outage, but fast UI is essential to keep them playing. But I think for most early-stage[1] startups, there are a few core engineering values in common: nimbleness, learning, sustainable speed, and simplicity.
Most of all, an early-stage startup needs to be nimble.[2] You don’t yet know what will make your product a success. You have a current best guess, derived from the product vision and your learnings so far; but that best guess is likely to change frequently and with little warning. You might need to support a new use case you never designed for, or abandon a feature that last week you thought would be essential.
Being nimble isn’t just shipping features quickly, although that certainly helps. It’s about being able to change direction quickly. It’s a property of how quickly you can come to a shared understanding of the new direction, and how quickly individuals shift gears to react to the change. And it’s a property of your codebase, how deeply your assumptions are embedded in it, and how hard it is to change.
An early-stage startup is trying to realise a product vision, while continually validating and adapting that vision according to market reality. That requires access to high-quality data about what people want, how they interact with your product, and how they react to the changes you push out. You need that data quickly - being nimble doesn’t help if you’re acting on stale information!
Depending on your business, engineering may be more or less involved in actually gathering the data. If you have a small number of users, the best way to learn is just to talk to them, or read their feedback. If you have millions of users, talking to them (while still valuable) will yield at best a small, biased sample, but you can learn a lot from metrics and A/B testing.
Regardless, engineering must deliver a product you can confidently learn from. If you push out a feature with a highly visible bug, most of the feedback you get will be about that bug, not about how well the feature solves users’ problems. If you’re trying to understand your high churn rate, but you have frequent outages or periods of slowness, you can’t tell if users are leaving because they don’t need your product, or just because they don’t trust it to work properly.
Overnight success is a myth.[3] What you consider success is a personal matter - helped thousands of people, had a lucrative exit, made the world a better place - but it’s probably going to take years. Your initial launch is not success; nor is your first funding round. A startup is a marathon, not a sprint.[4]
This implies that while reaching that initial launch quickly is important, it’s not enough. If rushing out the initial launch creates a pile of technical debt that slows down every subsequent new feature, or makes you less nimble, that might not have been a good tradeoff. Likewise, if the team is constantly firefighting production issues, that can destroy your focus, not to mention morale. Maybe you can raise a bunch of money off that initial launch, and hire a bigger team to offset the technical debt and fix all the bugs - but first Fred Brooks would like a word with you.
All that said, raw speed is really important too. Moving fast makes you more nimble, and it lets you learn faster. So the key is to find ways to move fast sustainably: so that you keep moving fast even as your team and codebase grow.
Balance is needed, because you don’t want to optimise for a future that never happens. If your startup runs out of money and dies, it no longer matters how quickly you can iterate. On the flip side, it’s equally pointless to fly over the first hurdle only to burn out before the finish line.
If your engineering team cares about nimbleness, learning, and sustainable speed, it helps a lot to also care about simplicity.
If your codebase and architecture are simple and easy to understand, not only can you change them more quickly; if your underlying assumptions change, simple, cleanly factored code is also easier to throw away and replace with something that fits the new requirements. A simple product is easier to learn from than one cluttered with features and settings.
Valuing simplicity means ongoing effort to keep things simple. Every new feature naturally creates complexity, and restoring simplicity takes effort and discipline. It means dropping features that are unused or hard to maintain; going back and addressing the workarounds and special cases. In extreme circumstances, you may need to push back on the product direction if it would make the system too complex.
Keeping things simple also means resisting overengineering. A quick, messy, unscalable solution might still be simpler than the new abstraction or infrastructure you’d need for the “right” solution. Premature generalisation is a great way to make your system harder to understand and change. Simplicity means deferring that work until you’re confident you need it, but also having systems in place to ensure you’ll revisit the quick hack once you do.
Valuing simplicity sounds, counterintuitively, like a lot of work, but it’s one of the keys to sustainable speed: that natural creep toward complexity is one of the things that slows you down. Resisting the creep is, of course, an unwinnable battle - your system will never again be as simple as your first version - but fighting it anyway lets you retain some control over your destiny. Investing in tools and automation can reduce the effort involved in keeping things simple.
In part 2 of this post, I will discuss common engineering practices like code review, automated testing, continuous delivery and crunch mode. I’ll show how to evaluate practices by their effect on nimbleness, learning, sustainable speed and simplicity.
Photo credits: Doug Weston, Jon Hurd, Tim Shields, and Mackenzie Black.
[1] By “early stage” I mean that one or more of the following is true: still searching for product/market fit, team is max 10 people, raised at most a seed round or possibly Series A. My experience is in web-based software startups, so what I’m saying may be less applicable outside that space.
[2] You might also call this quality “agility”, but the term “agile” has become strongly associated with the “Agile movement”, which has diluted the original meaning of the term with a plethora of methodologies, certifications and process consultants. The spirit of the original Agile manifesto is in line with what I mean by nimbleness, but Scrum and Extreme Programming are certainly not what I’m advocating here.
[3] I was trying to find the original source of the quote “It takes ten years to become an overnight success”, but I came up with an entire page of variations by different people. I view this as at least corroborating my claim that overnight success is a myth.
[4] The Scrum methodology uses the term “sprint” as its primary unit of planning: the team conducts a series of sprints, each beginning immediately after the previous one ends. This seems to me a severe form of cognitive dissonance.
Home bartending is a very satisfying hobby, not least because you’re working with booze. It offers much of the creativity and sensuality of cooking, but with a fraction of the prep and cleanup work. And there’s a thrill to making a tasty, classy drink at home, for around $4 instead of $8 plus tip. And it doesn’t take much to get started. Your average kitchen is probably only missing four cheap pieces of equipment to make a wide variety of cocktails.
If you want to get started for yourself, the following are my recommendations for what to get. They also make a nice gift package: buy someone these, wrap them up as a set, and they’ve got four fewer excuses not to start mixing their own drinks.
(Prices are taken from Amazon at time of writing, and for guidance only.)
You need a shaker to make shaken[1] drinks - Margarita, whiskey sour, Last Word, etc. You’ll see two different kinds: the Boston shaker, which is just a large metal cup that you use in combination with a mixing glass, and the three-piece shaker, which has a cup, strainer and cap all in one. The three-piece is iconic, but I prefer the Boston shaker as more effective and versatile (and also cheaper).[2] It’s especially better if you’re making a larger quantity: you can pour ingredients into two glasses, then use the same shaker to shake both.
My recommendation: Winco Stainless Steel Bar Shaker, $5.69
You need a bar spoon for making stirred[1] drinks - Martini, Manhattan, Negroni. You can use a normal teaspoon or tablespoon, but you’ll knock the hell out of the ice, leading to more dilution than you want, and unsightly chunks of ice floating in the drink. A bar spoon has a long handle and a thin, flat bowl so that you can swish it around the outside of the mixing glass, in the process bringing more of the liquid into contact with the ice.
It’s hard to go wrong here, so something cheap with good Amazon reviews will do fine. If you’re buying as a gift, though, I recommend the Aero from Standard Spoon for its beautiful industrial design and precisely balanced weight.
My recommendation (budget edition): Hiware Iced Tea Spoon, $7.99 (I guess you can make iced tea with it too)
My recommendation (gift edition): AERO Cocktail Spoon, $35
For measuring ingredients, you’ll need a jigger.[3] You might have seen the iconic “double cone” design, which looks cool but is rather hard to use without spilling. I use one that’s a miniature version of a lined measuring jug, which makes it very easy to measure out various amounts.
My recommendation: OXO Steel Angled Measuring Jigger, $6.95
Whether you’re stirring or shaking, you’ll need a strainer to pour out the drink without the ice coming along too. The classic design has a circular metal spring that lets it fit in glasses of various diameters, and a rubber grip so you can hold the strainer in place with the same hand as holds the glass.
My recommendation: OXO Steel Cocktail Strainer, $6.99
That’s pretty much all the equipment you need, but there are a couple of items worth discussing:
You can use just about anything here, so you’ve probably got something in the cupboard that will work. If you’re using the Boston shaker, though, you’ll get a better seal if the glass is about the size of a standard pub pint glass. Also, you’ll eventually end up breaking a glass in the process of freeing it from the shaker after shaking. I buy cheap, plain, sturdy pint glasses, which are hard to break and easy to replace.
My recommendation: ARC Luminarc Pub Beer Glass (Set of 4), $10.09
Since I started making cocktails I’ve become very opinionated about juicers. Lots of cocktails call for lime or lemon juice, and you’ll be doing a lot of juicing. I grew up with the type of juicer that has a lemon-shaped core surrounded by a dish to catch the juice; that type is really labour intensive for a large quantity of juice. Since I discovered the type that uses two long handles with a gear mechanism to bring leverage to bear, I’ve never looked back. It won’t get quite as much juice out of each fruit as the core-and-dish type, but it’s so much quicker and easier that it’s worth buying one extra lime every now and then.
My recommendation: Chef’n FreshForce Citrus Juicer, $15.91
You might be wondering why I’ve not suggested buying any actual alcohol. That’s because it depends what you want to make.
The tricky part about home bartending is that lots of cocktail recipes require just one bottle you don’t have. An Aviation sounds delicious, but where can you buy crème de violette? You’d love to try a Martinez, but what is Old Tom gin anyway?
To solve this, one school of thought is to buy enough bottles that you can make a good variety, and expand from there. For example, the 9-Bottle Bar and 12-Bottle Bar. These are good ideas, but a bit intimidating (not to mention expensive) for getting started.
My preferred approach is the 3-bottle bar. Pick a drink you like, look up a recipe, and buy the three or four bottles you need to make just that drink. I started off with bourbon, sweet Italian vermouth, and Angostura bitters, to make Manhattans. Congratulations: you can now drink that whenever you feel like it!
Next, repeat for a drink your roommate, significant other, or best friend likes. Now you can make them a drink whenever you want!
If you’re lucky, those two drinks already had one bottle in common. If not, browse around a bit for recipes for which you’re just one bottle shy. Recipes also have quite a bit of overlap, so once you’ve bought a couple more bottles, you’ll start to find you can make quite a variety of things.
Once you get started, you’ll want to look up recipes, ingredients, techniques and so on. It’s easy to find good-quality information online - for example, I refer often to the cocktail section of Serious Eats for recipes - so I’d lean on Google before shopping for cocktail books. That said, if you want to learn more about the history, variety and lore of cocktails, David Wondrich’s Imbibe is a very entertaining hybrid of recipe book and documentary.
Now go make something delicious!
Photos are licensed CC BY-NC-SA. If you use them, please provide a link back to this blog post.
[1] There are three main techniques for making cocktails: shaking, stirring and building. Without getting into the whole James Bond thing, it’s less of a stylistic preference and more about what’s in the drink. If your drink is all alcohol, stir it with ice and strain. If there’s a substantial juice component, shake it with ice and strain. A few drinks are “built” directly in the glass they’ll be drunk out of, such as the Mojito, involving muddled herbs, or the gin & tonic, where you want to preserve the fizz. Shaken vs stirred is an ongoing controversy, but this article breaks it down well.
[2] The Boston shaker looks like it takes a lot of work to keep the two halves together, but actually the chilling from the ice forms a vacuum seal.
[3] Measure your cocktails. Free pour looks cool, but the whole point of a cocktail is balancing the ingredients to complement but not drown each other. If the proportions are off, you’ll get a sugary mess, or something that doesn’t quite taste like anything. Besides, part of the creativity is adjusting the proportions to the drinker’s taste; but good luck doing that if you don’t know how much of everything is going in!
These sorts of statements often come up when discussing schedule or scope. They frame the decision being made as a choice between two approaches: you can do it fast or do it well, but not both.
I believe that “speed vs quality” is a false choice. Each time we naïvely frame a decision this way, we do a disservice to ourselves and to our users. Sometimes the best way to run fast is to be confident in your footing.
Doing things well helps you move faster:
Conversely, moving fast helps you do things better:
“Do it well or do it fast” is asking the wrong question. The right question is where to focus your effort. As I’ve argued before, it can help to frame that decision around confidence:
Of course you want to move fast, but fast isn’t enough. You want to be confident that what you’re shipping is useful. You want to be confident that you’ll know if it breaks. You want to be confident that you’ll be able to keep moving fast in the future.
To put this in the form of a slogan: “Move fast with confidence”.[1]
Consider running across rocky ground. If you sprint headlong, as fast as you can, you’re likely to trip over a rock, or twist your ankle. Slowing to a walk isn’t ideal either. The trick is to be aware enough, and take just enough care, to be confident in your footing. Look out for jagged rocks or crumbling earth where you need to step more carefully. Identify those patches of clear, solid ground where you can afford to sprint.
I don’t claim this is anything novel. I’m quite sure that high-performing teams are already thinking along these lines. But I’ve seen too many engineering cultures whose implicit slogans were just “move fast” or “be safe”; overly reductive, with predictably suboptimal results. “Move fast with confidence” acknowledges the need for balance: cutting corners where you can afford it, while taking care in those places that help ensure you can keep moving fast later on.
“The trick is to be aware enough, and take just enough care, to be confident in your footing.”
But what about quality? The slogan doesn’t address it. Should you just stop talking about quality, as my earlier post provocatively suggested? In product development we want to do more than just reach the finish line. Discussing risks and deadlines is an effective way to communicate, but it’s also cold and rational. Where do intuition, creativity and vision fit in? What about pride in a job well done?
I think the way to reconcile this is to recognise that quality, as Julie Zhuo has eloquently argued, is a bar, not a tradeoff. Julie concludes: “quality happens because it cannot happen otherwise”.
Quality is a cultural value. Quality means different things to different people. Teams will vary in how they define quality, and in how highly they prioritise it against other factors. But wherever you and your team set the quality bar is part of the cultural context in which you make decisions about schedule and scope.
In general, your cultural quality bar isn’t something to be traded off. There will always be specific cases where you do need to accept a lower level of quality in order to achieve some goal. Sometimes you need a demo ready for the conference announcement. Sometimes you don’t understand the use case well enough yet to know what solving it with high quality would even look like. But those cases should be exceptional. If you’re lowering your quality bar as a matter of course, then your culture doesn’t actually value quality as much as you think it does!
You can talk about quality within the framework of confidence. Rather than “should we do it well or do it fast?”, you might more productively ask:
“If you’re lowering your quality bar as a matter of course, then your culture doesn’t actually value quality as much as you think it does!”
You might have noticed a theme in my bullet points above: a lot of them are about how your decision now will affect the next time. It’s pretty rare that you just want to ship something, once, and then never do any more work. You ship one feature, you move on to the next feature. You ship an initial version, you learn from user feedback, and you iterate. Your strategy - or the market - changes, and your product changes with it.
Moving fast with confidence is more than just being confident in hitting the immediate goal: it’s also building confidence that you can keep hitting goals - and confidence that you can adapt as goals change. Often a little effort now can prevent you from getting stuck in the future.
“It’s pretty rare that you just want to ship something, once, and then never do any more work.”
Of course, it’s a balance. If you focus too much on the future, you can get stuck in analysis paralysis, or waste time optimising for something that never ends up happening. Striking the right balance has to be a team effort, requiring a diversity of skills, and a shared understanding of the quality bar. Framing the discussion around confidence can provide a shared language for people with different skill-sets and risk tolerances.
Because it’s not just you running across that rocky ground. It’s you and your team. One of you might have run across this field before and know which shortcuts are actually swamps. One of you has a compass. Another has a map.
Only by working together can you keep moving fast with confidence.
This post is the third in a series. If you found it interesting or provocative, you may enjoy the other posts:
Thanks to Greg Bayer, Erran Berger, Conrad Irwin, Lee Mallabone, and Jeff Wang for reviewing a draft of this post.
Photo credits: Emilio Vaquer (boot on rocky ground, cropped to remove border), and Jason Taellious (the word "quality" painted on a wooden board, cropped from original).
This might be reminiscent of another famous slogan, “Move fast and break things”. A slogan can’t capture the full nuance of an engineering philosophy, but it can be an effective and memorable way to convey the spirit of what’s important. Facebook’s slogan was, unfortunately, widely taken to mean “things being broken is fine”. The intended subtext was really: move fast and don’t be afraid to break things, because you can move fast to fix them too – which is a lot less irresponsible. Maybe because of the misinterpretation, they later abandoned “Move fast and break things” in favour of “Move fast with stable infra”. ↩
Engineer: “We’re going to have to cut feature X if we want to launch on time. It’ll take two months to build, but the deadline’s in a month.”
Product manager: “That’s a shame - our competitors have that feature. I thought you demoed it last week?”
Engineer: “That was just a prototype. We can’t ship it to users.”
Product manager: “Why not? It looks awesome, and it worked fine in the demo.”
Engineer: “Sure, it basically works, but the code is a mess, and we haven’t done any testing. It’s not ready to ship.”
Product manager: “It doesn’t have to be perfect. We need to move fast now - we can always fix it later.”
Engineer: “That’s what you said last time. Fine, we’ll ship the prototype… again. Don’t blame me when it breaks.”
Each person is trying to manage the risks they know about, and do what’s best for the business. Despite the best of intentions, these conversations can feel frustrating for both parties. It’s easy to feel like the other person doesn’t understand your concerns, or is stubbornly clinging to their own principles. The optimal decision is probably somewhere in the middle, but this kind of discussion rarely gets there.
I previously argued that we should stop using the word “quality” because it tends to polarise conversations. Now I want to offer an alternative. I propose that most conversations about schedule or scope would go better if they were framed instead around confidence.
Talking about confidence gives a way to educate each other about the risks you’re aware of, and how worried you are about them. What if the same conversation went this way:
Engineer: “We’re going to have to cut feature X if we want to launch on time. It’ll take two months to build, but the deadline’s in a month.”
Product manager: “That’s a shame - our competitors have that feature. It might lose us some power users, and those are exactly the users we want feedback from to be confident in our product thesis. Are you sure there’s no way to fit it in?”
Engineer: “We have a working prototype of X, but it’s pretty slow - I wouldn’t be confident in putting it in production. It could bring down our servers.”
Product manager: “X isn’t a core feature, so we don’t need 100% confidence in it. Can we get to 80% confidence in a month?”
Engineer: “I guess we can run the prototype on its own servers so it won’t harm the rest of the product. But it could break, and we won’t have monitoring, so we’ll have to spend time after launch bringing it up to scratch.”
Product manager: “That’s okay, we can afford that time after launch, and if it breaks we can handle the support calls.”
Engineer: “I’ll remind you that you said that! We’ll get the prototype ready.”
Confidence translates well across job functions, and everyone can reason about it. Engineers need confidence that the system will work, and that they’ll know when it breaks. Product managers need confidence that someone actually wants what the team built, and that they’re getting reliable user feedback. Engineering managers need confidence that the team is tracking to plan, and that other teams they depend on are on board with the approach.
Talking about levels of confidence means you can actually have a reasonable discussion about tradeoffs. The “80%” numbers can be arbitrary, but everyone understands the difference between 50% (a coin toss) and 99% (pretty certain).
Talking about confidence doesn’t mean there will no longer be disagreements. People will have knowledge of different aspects of the decision being made, which will lead them to differing levels of confidence. People will have different risk tolerances: how confident they prefer to be.
In the imaginary conversation above, the engineer might have insisted that putting a prototype in front of users was irresponsible. The product manager might have objected that an unmonitored, crash-prone version of the feature wouldn’t teach them anything about user behaviour. Front-line support might have objected to the impending deluge of complaints about the unstable feature.
These are good disagreements. They are the reason you had the discussion in the first place, rather than just having one person decide. When you launch, you want to know which aspects of the product you are confident in, and where you have gaps; not to push something shiny out the door and only then discover nobody tested it under load.
To make an informed decision, you want the people with the most knowledge of each aspect to assess their level of confidence. And you want people with different comfort levels to be bought into the tradeoff you’re making. When the servers do catch fire, you don’t want the people holding the fire extinguishers to feel like they’re cleaning up someone else’s mess; you want them to feel that you decided together to risk some fires in order to gather crucial feedback.
Taking different risk tolerances into account is tricky, because you have to do it consciously and deliberately. If you let nature take its course, the more risk-averse people will tend to be overruled, or even ignored. If someone more risk-tolerant is making the final decision, one of the risks they’re often prepared to accept is pissing off the risk-averse people!
Of course you can’t delay every decision until everyone is happy. Rarely is everyone involved going to reach their preferred level of confidence - there just isn’t that much certainty to go around! You might have to be okay with shipping the prototype, or with dropping the feature. But if you can take more people’s levels of confidence into account, you’ll reach a better decision - and one with more people bought into it!
This is especially important because risk tolerance is situational. One feature might be business critical, another might be fine with known bugs. One small change to a creaking system might be the final straw, another might be a great opportunity to put in that fix you’d been talking about for months.
The next time you’re in a conversation about schedule or scope that seems to be going nowhere, or where it seems like not every voice is being heard equally, try reframing the conversation around confidence and risk. Instead of absolute terms like “good enough” or “launch blocker”, present alternatives; talk about the costs and benefits.
It’s really hard to answer such questions by arguing over absolutes, but you can make surprising progress with a few minutes of empathetic conversation about what you’re trying to achieve.
Of course, maybe the answer is “we’re not all that confident, but that’s fine”. Doing anything interesting requires some level of risk. Individuals and companies vary in how much, and what kinds of, risk they are willing to take.
However, I believe that building a culture that values confidence leads to being better at speed, quality, and all the other things you value.
In subsequent posts I plan to back up that claim by applying this framework to some traditionally frustrating areas of software engineering, such as “productionisation” (aka turning a half-finished prototype into a half-finished product), premature generalisation, and whether startups should bother with unit tests.
This post is the second in a series. If you found it interesting or provocative, you may enjoy the other posts:
Thanks to Paul Biggar, Peter van Hardenberg and Dorothy Li for reviewing a draft of this post.
Photo credits: RDVPHOTO (silhouettes pointing at a satellite photo), and clement127 (Lego man running from mushroom cloud).
Don’t get me wrong. I want to build software I can be proud of. I want to be part of teams that build great products. I aspire to craftsmanship. What I dislike is the word “quality”, and how it tends to polarise conversations.
A lot of factors go into software quality. Good software is fast. Good software is maintainable, readable, scalable, and well tested. Good software has attractive UI and intuitive interactions. Good software has no bugs, or at least no serious bugs, or at least no bugs that our customer support team can’t work around.
In practice, quality means whatever you want it to mean. To a fan of unit testing, quality means investing in unit testing. To a designer, quality means beautiful screens and careful interaction design. To a customer support manager, quality means all bugs are documented and the serious ones have an ETA.
If you tell people that your estimates are higher than expected because you want to do a quality job, some of them will think you’re spending time refactoring and writing tests, and some will think you’re going pixel-by-pixel in Photoshop. Some of them will end up disappointed.
Not only is quality subjective, it’s also impossible to quantify. It’s not clear how much time spent focussing on quality will prevent production issues, or how much less quality you’ll get if you do a rush job. Do you want your products to be 50% better designed, or will 30% do?
That makes it hard to track quality over time, but that isn’t the real problem. The problem is that quality leaves no room for discussion. Debates boil down to “do you want it done properly or not?”
Often this comes up in the form of a decision between quality and speed:
- “We’re a startup - we don’t have time for testing and code review, or we’ll miss the market window.”
- “We’ve been moving too fast and breaking too many things. We need to slow down and be more careful.”
If you frame it this way, you can choose speed or quality, but you can’t have both.
It’s not generally possible to argue against quality. It’s a rhetorical word, loaded with positive connotations. That means discussions about improving quality tend to come across as criticism:
“I can’t do a quality job on this timeline.”
“I can do a bad job, but why would you want that? You must be wrong about when you need this.”
“I’d like us to invest in unit testing - it’s a great way to improve the quality of your code.”
“Our code is terrible right now.”
“We’re having too many production issues, so we need to focus on quality.”
“The team didn’t do a good enough job. You didn’t do a good enough job. The prod issues might be due to product management insisting on impossible requirements, or that reorg we had a month before launch, but can we all agree that a quality product wouldn’t have these problems?”
So quality is hard to argue against, hard to nail down, and it’s all or nothing. Not surprisingly, some people respond by rejecting the concept of quality as unhelpful. Quality is a luxury we don’t have time for; better to ship something than polish it forever.
What “we don’t have time for quality” really means is: “I’m not sure what you mean by quality, but I know moving fast is more important than what I mean by quality”. Unfortunately, all too often what gets said is “we don’t have time for quality”. Framed in these terms, there’s no common ground, and we take up sides, each frustrated at the other’s inability to see the big picture.
The pattern repeats at larger scales. An engineering team can split into “pragmatic hackers” (“we just get the job done”) and “serious engineers” (“gotta clean up the mess those cowboys make”). Or engineering can lose trust in the business (“don’t those suits understand how short-sighted they’re being?”) and vice versa (“ignore the nerds grumbling about technical debt, they’re always saying the sky is falling”).
What’s really going on here is a failure of communication. This talk of factions might sound dramatic, but I bet you’ve heard variants of the above at the watercooler or the bar. Every little joke and sarcastic aside serves to undermine trust and communication.
The reality is that doing anything in the real world involves difficult decisions in the face of constraints. Everyone has more exposure to some of those constraints than others. The product manager sees customer feedback, feels the urgency of staying relevant in the market. The designer sees limited screen real estate, the need to engage the user quickly. The engineer sees maintenance costs and technical limitations. But you’re all working toward the same goal: ship products, grow the business, delight your customers.
The only way to do that is to communicate. These conversations often seem like a contest between one person’s viewpoint and another. But the best answer for your situation probably has elements of both! Instead of your diverse experiences getting in the way of a productive conversation, why not have them be the conversation?
Help other people understand the constraints you see, and you can work together to decide the best tradeoffs to make. Understand the constraints another person faces, and you can understand how to help them. Instead of negotiating, you can collaborate.
Next time you’re frustrated by an unrealistic deadline or an inflated estimate, instead of assuming that person just doesn’t get it - ask how things look from where they are. What does a product manager do every day? What does it mean to an engineer that the code is a mess? What was the designer trying to achieve with that colour change?
Instead of talking about quality, let’s talk about goals. We want to see whether customers engage with this new feature. We want to cut down how often our engineers get paged at 3am. We want the main features to be at most 3 clicks away. We want to communicate the value of the product in the first 5 minutes. Let’s talk about how our different goals interact. Maybe there’s a way to do it that gets everyone closer to their goals?
Because that’s the fun part. That’s why we work in teams - bringing our different strengths together, solving problems, to build something awesome.
So let’s have less talk about quality, and more talk about empathy. Go talk to someone. Learn what makes them tick. And make something cool together.
Update: This post is the first in a series. If you found it interesting or provocative, you may enjoy the subsequent posts:
Thanks to Paul Biggar, Peter van Hardenberg and Dorothy Li for reviewing a draft of this post.
Photo credits: Thomas Hawk ("quality" wall painting), saeru ("quality workmanship" sign, cropped from original), Jeff Eaton (Lego argument), Michael Summers (discussion with notebook), and the Anita Borg Institute (people collaborating around a laptop).
I want to get a lot more people programming, and I think we need a better analogy to get there.
People compare programming to magic with the best of intentions. In a heartfelt and inspiring keynote at Strange Loop on the importance of sharing the joy that programming brings us, Carin Meier likened programming to dreams of flying. The Kickstarter project CodeSpells offers to let you “express yourself with magic”. Fred Brooks wrote that the programmer “builds castles in the air”.
As a way to inspire more people to learn to program, this seems to make sense. Our popular culture is full of magical powers we would all love to have access to: Harry Potter rides a broomstick and fights evil, Gandalf summons giant eagles to get him out of harm’s way, Wolverine can heal from any injury. (Of course, the “mutations” of the X-Men and “The Force” in Star Wars are magic by another name.) Why wouldn’t anyone want to learn something that gave them magical powers?
Here’s the problem: it’s almost always a special group of people who have the magic. Harry Potter learns early on that he is a wizard, and that most other people are not wizards. Gandalf never teaches Frodo any spells. The X-Men have powers because of their mutations - they were just born that way.
When we tell a non-programmer, who has watched Harry Potter and Lord of the Rings, that programming is like magic, he’s likely to respond: “that sounds great; I wish I could do it”. Most people already have little concept of what programming is like. Telling him programming is like magic confirms what he probably already suspected: programming is complicated and mysterious; it’s not for him.
Comparing programming to magic gives people some idea of why it’s fun, but no idea of what it is - or how to learn it. Claiming to be wizards, we are more like the Wizard of Oz - hiding behind a curtain of mystique, magnifying our achievements to impress others. If we want everyone to have access to this power, we need to pull back the curtain.
Programmers aren’t wizards. We’re just people, pulling levers to operate the machine. We learned to program the same way we learned to cook or ride a bike - by trying and failing, changing something and getting a different response, and enjoying it enough to keep going. We need to show what that process looks like, and how anyone with access to a computer can do it.
Programming isn’t magic. It’s building a machine that talks to another machine that moves data around, or makes pixels change colour, or makes a speaker cone vibrate. We need to show how to do those things, and how they add up to produce results that feel like magic.
That brings me back to the Strange Loop keynote. After Meier’s presentation, she was joined by two live-coding DJs, Sam Aaron and Jonathan Graham. While they built up a soundtrack from samples and loops, Meier set up a menagerie of robots, including a quadcopter, and wrote a quick program to control the robots. Watch the part at 40:30 where she gets the quadcopter dancing to the beat.
Let’s go over that again. Three programmers, with a live audience, made a flying robot dance to music they created. If that’s not a magical result, I don’t know what is.
But it wasn’t magic. Meier controlled the robots by sending a few simple commands to their motors. Aaron and Graham produced the music with a screenful of code, using the Overtone toolkit. Using a simplified version of Overtone, Aaron has successfully taught 12-year-olds to make music using code.
The kids in Aaron’s class learned how to play a sample, change the pitch to make a melody, and loop it to make a backing track. They learned how to build up several tracks into an ensemble, and how to tweak the music until it sounded good. They learned all this in far less time than it would have taken them to learn a real musical instrument - or even to coax any pleasant sounds out of one.
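To give a flavour of that workflow - play a sound, change the pitch to make a melody, loop it into a track - here’s a toy sketch. This is plain Python with the standard library, not the Overtone code Aaron’s class actually used, and the filename and note choices are just made up for illustration:

```python
# A toy version of the workflow: generate a sine-wave "sample",
# repeat it at different pitches to make a melody, loop the melody
# into a track, and write the result to a WAV file.
import math
import struct
import wave

RATE = 44100  # samples per second


def tone(freq, seconds=0.25, volume=0.4):
    """One 'sample': a sine wave at the given pitch."""
    n = int(RATE * seconds)
    return [volume * math.sin(2 * math.pi * freq * i / RATE) for i in range(n)]


# Change the pitch to make a melody (a C major arpeggio: C, E, G, C)...
melody = []
for freq in (261.63, 329.63, 392.00, 523.25):
    melody.extend(tone(freq))

# ...and loop it to make a backing track.
track = melody * 4

# Write the result out as a 16-bit mono WAV file.
with wave.open("melody.wav", "wb") as f:
    f.setnchannels(1)
    f.setsampwidth(2)  # 16-bit samples
    f.setframerate(RATE)
    f.writeframes(b"".join(struct.pack("<h", int(s * 32767)) for s in track))
```

A dozen lines of tweakable code, and you have a tune - which is exactly the kind of feedback loop that keeps a beginner going.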
Of course, they were also learning programming: how to build up complex systems from simple units, how to debug by changing parameters and observing the change in output. But as far as they were concerned, they were making music.
So if we want the rest of the world to have magical powers too, maybe instead of comparing programming to magic, we should compare it to music. Programming is like a musical instrument. A beginner can play simple tunes, and as they grow more adept, they can add chords, harmonies, counterpoint. Not everyone can write a symphony. But we can all make robots dance.
Inspired by a conversation with @roseaboveit.
Lego photo credit: evaysucamara
This was originally posted on LinkedIn
We in the software profession have done a terrible job of explaining to the public what it is that we do. Everyone has interacted with a teacher or a doctor. There are TV shows about lawyers, cops, even government officials. However warped our impression of their day-to-day, we can relate to these professions. TV depicts programmers as modern-day wizards, socially aloof, hacking into systems or bringing the new algorithm online just in time to stop the cyberterrorists — totally disconnected from people’s experience of the software they use every day. Software remains mysterious.
This isn’t just a problem for awkward “so, what do you do?” conversations at parties. I believe one reason why so many demographics are underrepresented in software is that unless you grew up with it, you’re unlikely to have the faintest idea what making software is actually like. Why would you strive — particularly against economic obstacles and systemic biases — to enter a profession you know nothing about?
Inspired by a friend who couldn’t see what was so hard about programming, Peter Welch wrote a hilarious, heartfelt and all-too-true rant about what writing software is like. His title, and answer: “Programming Sucks”. The whole, long post is enjoyable reading, but here’s a representative excerpt:
Imagine joining an engineering team […] for a bridge in a major metropolitan area. […] The bridge was designed as a suspension bridge, but nobody actually knew how to build a suspension bridge, so they got halfway through it and then just added extra support columns to keep the thing standing, but they left the suspension cables because they’re still sort of holding up parts of the bridge. Nobody knows which parts, but everybody’s pretty sure they’re important parts. […]
Would you drive across this bridge? No. If it somehow got built, everybody involved would be executed. Yet some version of this dynamic wrote every single program you have ever used, banking software, websites, and a ubiquitously used program that was supposed to protect information on the internet but didn’t.
Welch brilliantly describes the nuances and stresses of trying to cobble together something useful, based on a blueprint nobody bothered to draw, out of parts designed to do almost, but not exactly, what you need them to do.
Reading this, I immediately wanted to send it to every friend and family member to whom I’d failed to explain what it was I did all day. But I didn’t send it, because it only tells part of the story. Programming sucks, so why do it?
For me, it’s because programming is amazing.
Programming is like building structures out of Lego, but I never run out of Lego bricks, and if there’s no brick with the exact shape that I need, I can make that brick. I can take the structures I build and use them as bricks to build bigger, more ambitious structures. I can build tools out of bricks to help me build quicker. If I build a model city, or a crane for building model cities, I can offer them to millions of people to download and play with, in any part of the world.
Fred Brooks wrote in The Mythical Man-Month:
The programmer, like the poet, works only slightly removed from pure thought-stuff. He builds castles in the air, from air, creating by exertion of the imagination. Few media of creation are so flexible, so easy to polish and rework, so readily capable of realizing grand conceptual structures.
Yet the program construct, unlike the poet’s words, is real in the sense that it moves and works, producing visible outputs separate from the construct itself. It prints results, draws pictures, produces sounds, moves arms.
One of the most liberating things about writing software is that the tools you use for it are also software. Remember that Lego tool you could buy to help you pry bricks apart? Imagine if you could build that tool out of Lego bricks. We can use the skills we have for writing software to improve the tools we work with. We can write software that makes us better at writing software.
There is a dark side, as Welch entertainingly describes. There are deep, scary flaws in the tools and processes we use to build software. Everything is more difficult and arcane than it should be. We spend so much effort, after the software is “done”, fixing problems that should never have been possible to introduce in the first place. Sometimes it’s amazing that anything ever works. But somehow, it does, and so we have smartphones, and Angry Birds, and the Internet. Programming sucks, but look at where we are!
Programming gives us live video conversations with relatives around the world; a map of our own biology; widgets that monitor oil pipelines from the inside; spreadsheets that run entire businesses; games where you build cities, or pretend to be a goat.
Of course these are specialised fields, each with its own demands and disciplines, but they all start with writing apparent gibberish in a text editor. The reach and breadth of what you can do with gibberish is remarkable.
Programming sucks, but for me, that is cause for tremendous optimism. We can use programming to improve programming! We can reduce the complexity that programmers have to keep in their heads. We can make more visual, interactive, intuitive tools for understanding the behaviour of programs. We can make it easier to fix incorrect programs, and easier to write correct ones. Programming is only going to get easier, and more powerful, and more accessible.
If we’ve come this far — in the 60-odd years that programming has even been possible — while programming sucks, how far can we go when it doesn’t?
Brooks’ phrase “building castles in the air” was once used satirically, to mean chasing an impossible dream.
For me, that’s what programming is like.
Thanks to Joseph Chow, Lily Han, Conrad Irwin, Martin Kleppmann, Lee Mallabone and Kiran Prasad for reviewing a draft of this post. Their feedback improved this post immeasurably.
Lego photo credit: dunechaser