Invest in tools

There’s a powerful quote from episode 231 of Smarter Every Day (a tour of a rocket factory) that I think about every now and then. You can watch this 90-second piece (the link is timestamped), ending at 9:52, to get the full context, but here’s the quote:

It’s sort of an interesting thing, in the real world, how the engineering tools that are available dictate the kind of designs that we use.

Also interesting is this entry on Joe Armstrong’s blog, which ends with:

If you have the right tools it’s often quicker to implement something from scratch than going to all the trouble of downloading compiling and installing something that somebody else has written.

Or, in shorter form, this quote attributed to Abraham Lincoln:

Give me six hours to chop down a tree and I will spend the first four sharpening the axe.

All of these illustrate and reinforce my opinion that tooling is the most important thing to focus on when approaching any sufficiently complicated project. It also means being able to build tools that apply to your project, because they may not exist yet — the video from Smarter Every Day ends up mentioning that many of the tools at the rocket factory come from big brands but are custom-made.

I think it’s fair to extend Conway’s law with this reasoning: the design of a system is also influenced by the tools available to it (which is essentially a rewording of the first quote). In software development, this gets particularly interesting, because organisations often have the means to produce their own tools, which implies that there’s a double application of Conway’s law here: the first when the organisation designs its tools, and the second when it designs other systems with the tools it previously created.

Besides understanding that new tools will most likely need to be created for any sufficiently complicated project, I also find it important to explore how a tool influences the success of a project.

Possibly the most straightforward way for a tool to influence a project’s success is to make a job better or faster. As an example, here’s a quote from a Hacker News comment:

[…] I was able to make it about 50 times faster on the dataset we were dealing with, which made using the tool change from “let’s go grab lunch while this finishes” to “let’s stream the logs through this live”.

This is a casual example, but it could be the basis for a project to be delivered on time. Back to that Smarter Every Day video: some of those tools meant that ULA was able to manufacture rockets more precisely, which increased their reliability.

The key here is that there was already some job being done, possibly with some other tool, but the new tool improved how that job was done. A tool making a job better or faster isn’t the only way that it influences a project, though. This reasoning implies that a problem must exist first, and then a tool will be created or used to solve that problem. But the reverse also happens frequently, with new projects or use cases being possible because of tools that were created without that project in mind.

We can extend the scenario from the HN comment linked above. Being able to process logs in a streaming fashion rather than in batches could unlock scenarios where logs are processed on the devices generating them, rather than later in a data warehouse. As a more concrete example, if the log processing means that data is “compressed” by a large factor, this could be the difference between being forced to sample the logs as they leave a device (e.g. due to bandwidth constraints) and being able to retain all relevant information related to logs, allowing precise debugging/drill-down.
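To make that a bit more concrete, here’s a minimal, hypothetical sketch (my own toy example in Go, not the tool from that comment) of streaming aggregation: each log line is folded into a small summary as it arrives, so only the summary would ever need to leave the device.

```go
// Hypothetical sketch: aggregate log lines as they stream through,
// so only a compact summary ever leaves the device.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	// Counts per log level stand in for whatever "compressed" summary
	// a real pipeline would ship instead of the raw logs.
	counts := map[string]int{}

	scanner := bufio.NewScanner(os.Stdin)
	for scanner.Scan() {
		line := scanner.Text()
		switch {
		case strings.Contains(line, "ERROR"):
			counts["error"]++
		case strings.Contains(line, "WARN"):
			counts["warn"]++
		default:
			counts["info"]++
		}
	}
	if err := scanner.Err(); err != nil {
		fmt.Fprintln(os.Stderr, "read error:", err)
		os.Exit(1)
	}

	// The aggregate is tiny compared to the raw stream, which is what
	// makes on-device processing plausible in the first place.
	fmt.Printf("info=%d warn=%d error=%d\n",
		counts["info"], counts["warn"], counts["error"])
}
```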

An example I like a lot is that of programming languages enabling projects that would be impractical without them, and that entry on Joe Armstrong’s blog illustrates this: a few lines of Erlang code enabling a use case (file transfer) that was impractical otherwise. Programming languages provide a runtime and/or comptime (to borrow Zig’s terminology) that helps solve particular problems, and they also provide an abstraction for using them (usually dedicated syntax). By doing so, they give us a way to communicate solutions that humans (and maybe machines) can reason about more easily, which means we can solve problems that wouldn’t be easily solved otherwise.
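Armstrong’s post makes the point with Erlang; as a rough sketch of the same spirit (my own example, in Go rather than Erlang), a handful of lines is enough to stand up an ad-hoc file server, with no configuration at all:

```go
// A minimal sketch of the "right tool" idea: serve the current
// directory over HTTP in a few lines, instead of configuring a
// full-featured FTP server. (Armstrong's original used Erlang;
// this is just the same spirit in Go.)
package main

import (
	"log"
	"net/http"
)

func main() {
	// Expose ./ read-only over HTTP on port 8080 (assumed to be free).
	http.Handle("/", http.FileServer(http.Dir(".")))
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```

Run it in a directory and fetch files with a browser or curl. The point isn’t this particular snippet, but that the language plus its standard library can make “write it from scratch” the cheapest option.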

Original title: The power that new tooling unlocks

Go watch episode 231 of Smarter Every Day. It’s a tour of a rocket factory. The entire thing is very cool to watch, but if you’re in a hurry and just want to know what that video has to do with software development tools, I suggest watching this 90-second piece starting at 8:22 and ending at 9:52 (feel free to come back here after that timestamp). What Tory Bruno says at the end of this piece is very important: “It’s sort of an interesting thing, in the real world, how the engineering tools that are available dictate the kind of designs that we use”.

I watched that video about 2 months ago, and I still pause and think about this quote sometimes. It’s the kind of thing that sounds obvious, but you don’t really notice it until someone else points it out to you. I like to think of this as an extension to Conway’s law. The design of a system is not only influenced by an organization’s communication structure, but also by the tools available to it. If the organization is also the one creating its own tools, well… you’ll see a much stronger reflection of its communication structure in its designs.

Now, read this entry on Joe Armstrong’s old blog. You can take all sorts of things away from it, but I like to see it as an anecdote on the power that the right tools unlock. If you just want to transfer files between two machines and all you have are FTP servers full of features that require lots of configuration before doing anything, you’ll have to go through a lot of crap before achieving what you want.

Once again, this comment on Hacker News is another anecdote on the same thing. I’ll quote part of that comment: “…which made using the tool change from ‘let’s go grab lunch while this finishes’ to ‘let’s stream the logs through this live’”.

I believe the tools we use in software engineering have a much bigger impact on how we design software than most people think. And that’s why I think we should never be satisfied with the tools we have. There are two ways that I see new tools unlocking new potential:

Doing a job faster

A tool can do a job faster, perhaps a lot faster. If it does, it unlocks all sorts of cool use cases that people didn’t expect to be possible. The trick is that it’s really hard to think about these new use cases in advance, so you never know if it’s worth investing time in making some tool faster. And even if you do make a tool faster, you also need experience and the right mindset to try out new things before you can start to figure out the different scenarios that benefit from the faster tool.

This may sound more pessimistic than it’s intended to be. Sure, a faster tool will at least save you time on the task you need it to perform. But let’s briefly appeal to imagination here. Let’s take that 50x faster tool from the HN comment I linked earlier. Suppose that tool is used in a system from your company that pulls lots of logs from customer machines, aggregates them in some way, and shows the result to the customer in some nice UI. Before the tool became 50x faster, your company’s product was only able to show aggregated log data the next day, which means a customer would only get any value out of the system 24 hours after the logs were generated. Now, with the faster tool, you can stream the results to the customer! That’s a lot of value there.

However, customer logs are somewhat sensitive. It’s hard for your company to earn the trust of customers because your product still requires access to the raw logs. Especially with all these privacy regulations, customers are more concerned about sending log data to third parties (well, that’s probably not true at all, but I’d like to believe that. We’re in imagination land here). But hey — you just made a tool 50x faster. You can now run this on your customer’s machines (and they’re fine with the small performance impact of this new tool), and only get the aggregated data out of it. Now your product becomes more appealing to a different set of customers, and you just got a lot more business for your company. This has a lot more impact.

We just explored two outcomes from having a much faster tool for the job. One of them is very straightforward, and gives you some benefits, but the other one has a lot more upside. The difference is that the latter required access to more experience (in this case it was business experience: knowing what keeps people from signing up for your product), and also the right mindset to try a completely different approach (“we’re a service business, but what if we tried running this tool on our customers’ machines?”). That’s what I meant when I said it’s really hard to figure out these new use cases just from making things faster.

We can leave imagination land and look at something real for once: cloud gaming. There are a bunch of game streaming services now (Google Stadia, GeForce Now, Xbox xCloud), all made possible because, among many other advancements, a lot of things became faster in gaming (it wasn’t just hardware, although faster hardware certainly helps). Faster gaming by itself would just let me play games at higher frame rates on my computer. It took a mix of other things to unlock this new use case.

Creating a different paradigm

I like the example of programming languages here, because they’ve been part of every day of my life for over a decade now (just using them; I’m not smart enough to create a new language yet). I’ll be boring and just reference Go and Rust. Both are relatively new languages, but they require you to take different approaches when writing code. In doing so, they simplify tasks that are way more complicated in other languages. In Go’s case, I like how its channel mechanisms allow one to write concurrent code effortlessly. In Rust’s case, the compile-time checks and guarantees simplify the entire software development cycle (you can eliminate a bunch of steps before shipping code because the compiler ensures some bad things won’t happen).
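To illustrate the Go side of that claim (with a toy example of my own, not anything from the Go docs), this is the kind of fan-out/fan-in concurrency that channels make almost mechanical to write:

```go
// A small sketch of the concurrency Go's channels make easy:
// fan work out to a few goroutines and collect the results.
package main

import (
	"fmt"
	"sync"
)

func main() {
	jobs := make(chan int)
	results := make(chan int)

	// Start three workers that square whatever comes in on jobs.
	var wg sync.WaitGroup
	for w := 0; w < 3; w++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for n := range jobs {
				results <- n * n // stand-in for real work
			}
		}()
	}

	// Close results once every worker has finished.
	go func() {
		wg.Wait()
		close(results)
	}()

	// Feed in some work, then signal that there's no more.
	go func() {
		for i := 1; i <= 5; i++ {
			jobs <- i
		}
		close(jobs)
	}()

	// Collect everything until the results channel is closed.
	for r := range results {
		fmt.Println(r)
	}
}
```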

Those two languages are new tools that one can use when developing software, and (if used in the correct situations) they simplify tasks that were more complicated/slower before. Generally, the use cases that they enable are straightforward, but the trick with them is that building the tools is a lot of work. It’s not as simple as “make something faster” (not trying to diminish the effort needed to make things faster).

Always invest in tooling

Go back to that video snippet I shared at the beginning of this post. Recall how Tory mentions that the old pattern for the rocket barrel was designed in the ’90s. Looking at the Wikipedia entry for Vulcan, I think it’s safe to say that it took around 15 years to get a new rocket barrel design. That doesn’t necessarily mean that it took that long for new tools to be available, but just think about this time interval. 15 years is about as long as I’ve been programming now. That’s a lot of time.

There’s still a lot of potential out there. Lots of new use cases we haven’t even thought about yet. Those will only be possible if you’re always investing in new tooling, and especially if you’re also looking at new tools that other people are building. One day, you might just find the right combination of a new tool, the experience and the right mindset to unlock cool new things for all of us.