“Always write tests before writing any code.” “Unit test everything.” “100% code coverage.” You’ll hear all of these sentiments repeated like mantras, but are they really realistic? I’ll share my own practical, pragmatic way to approach writing tests. Here’s the gist: first, there is no one-size-fits-all approach. Second, tests are like armor: they protect you, but slow you down.
No one-size-fits-all approach
To motivate this, let’s go all the way back to first principles. Why bother writing tests at all? Because it reduces defects by automatically catching regressions. Why reduce defects? Because defects hurt the purpose for which the code was written, which might be a service, product, or other piece of functionality. Bugs and downtime lead to degraded service. Therefore, writing tests should be seen holistically as one line of defense amongst others to prevent bugs and downtime. Other defenses include continuous integration, manual QA, robust deployment, good monitoring, and fast incident response.
Zooming out like this is important because it forces us to consider the potential risks of shipping bad code. If you’re writing software that powers x-ray machines or critical infrastructure, a regression is really bad news. If you’re building a personal side project to organize photos, a regression is likely no big deal. Therefore, just like you would expect critical infrastructure to have many lines of defense against regressions, you’d expect a simple personal project to have few or no defenses, which probably means few or no tests. You can see that there is no one-size-fits-all approach to writing tests, and that a good test-writing strategy should take the ambient context into account. Some questions to ask when deciding whether a test is worth writing:
- What could go wrong if a regression is shipped? If the consequences are terrible, consider writing a test.
- Is it easy to cause regressions in this part of the code? If so, consider writing a test.
- If a regression is shipped, how long will it be before someone notices? If it will be a long time, consider writing a test.
- What other lines of defense are available that would assist in detecting regressions? If there are few in this area, consider writing a test.
Tests are like armor
Everything I just said can be rephrased as “consider the benefits of writing a test and why the test should be written.” The reason to do this at all is that tests have a cost. They protect you, but they slow you down, because you have to write, maintain, and run the tests. Therefore, because they have both a benefit and a cost, it’s imperative to maximize the benefit you get relative to the cost you pay when writing tests.
We already talked about benefits above, but the cost part is just as important. This is where attitudes like “unit test everything” and “100% code coverage” fall down even more, because they ignore the potentially huge costs of those policies. Tests are a part of the codebase and thus need to be maintained. Writing a test can be time-consuming relative to the feature being implemented. Running some kinds of tests, like end-to-end tests, can take a long time because they often need to spin up additional resources, and sometimes give ephemeral false failures (known as flaky tests, or test flapping).
That’s why it’s important to think about the cost-benefit tradeoff when choosing tests to write. You know you’ve made a good choice if you feel relieved after writing the test, or if it saves you time debugging or refactoring. Conversely, if it feels like a chore, or if you’re spending way too much time testing relative to writing functionality, consider taking a step back and thinking about how to make testing easier by refactoring or reducing the scope of the test. Or, consider whether the test is necessary at all.
I should also emphasize that building your codebase in such a way that it is easy to write unit tests will almost certainly have a fantastic impact on most medium-sized, non-toy projects. Not only will it have compounding effects for testability, it usually produces a cleaner codebase in general.
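As a sketch of what “easy to unit-test” can look like in practice (all the names here are hypothetical), passing dependencies in as arguments rather than reaching for globals lets a test substitute cheap fakes:

```python
from datetime import datetime, timezone

# Hypothetical example: the session logic takes its collaborators
# (the current time and a token store) as arguments instead of
# reaching for globals, so a test can substitute cheap fakes.
def is_session_expired(token, now, token_store, max_age_seconds=3600):
    issued_at = token_store.issued_at(token)
    return (now - issued_at).total_seconds() > max_age_seconds

class FakeTokenStore:
    """In-memory stand-in for a real database-backed store."""
    def __init__(self, issued):
        self._issued = issued

    def issued_at(self, token):
        return self._issued[token]

# The test needs no database and no real clock.
store = FakeTokenStore({"abc": datetime(2024, 1, 1, tzinfo=timezone.utc)})
fresh = datetime(2024, 1, 1, 0, 30, tzinfo=timezone.utc)   # 30 minutes later
stale = datetime(2024, 1, 1, 2, 0, tzinfo=timezone.utc)    # 2 hours later
assert not is_session_expired("abc", fresh, store)
assert is_session_expired("abc", stale, store)
```

The design choice here is that the production code never imports the fake; it simply accepts anything with the right shape, which is exactly what makes it cheap to test.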
Let’s go over a few scenarios, where I’ll provide my opinion on what kind of tests are needed.
Tiny side project: You probably don’t need tests, unless you intrinsically like writing them. Or, if the side project is incredibly easy to unit-test (like a deterministic library function with no I/O), it is so easy that you might as well write some tests.
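For instance (with a hypothetical helper), a pure, deterministic function with no I/O can be tested in a couple of lines of plain asserts, so the cost is near zero:

```python
# Hypothetical example: a pure function with no I/O is almost free to test.
def slugify(title):
    """Lowercase a title and join its words with hyphens."""
    return "-".join(title.lower().split())

# Tests for a function like this are one line each.
assert slugify("Hello World") == "hello-world"
assert slugify("  Photo   Organizer  ") == "photo-organizer"
assert slugify("") == ""
```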
Large side project: Tests might be helpful if you’re starting to lose track of parts of the code base, or if regressions keep cropping up as you add more functionality. The benefit of preventing major regressions may exceed the cost of writing a few key tests for key flows.
Tiny startup looking for product-market fit: It depends on the space that you’re in, since different spaces have different impacts from regressions. But if you’re building a simple website or application, you probably need hardly any tests. Consider a few key end-to-end tests to make sure core functionality is not broken, and unit tests for particularly tricky functions, but nothing else. Your goal is to iterate quickly, and tests are likely an unacceptable tax at this moment.
We were in this scenario at Penny. We wrote a few key end-to-end tests that covered major functionality, and some unit tests for particularly tricky functions, but nothing else. Combined, they caught a large chunk of our issues. We resolved issues that made it to production through our lightning-fast support response and two-minute commit-to-deploy pipeline.
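A “key end-to-end test” in this spirit can be as small as “start the app, hit the one endpoint everything depends on.” Here is a self-contained sketch using a stand-in server (the real app and its endpoints are assumptions for illustration):

```python
import http.server
import threading
import urllib.request

# Stand-in for the real application: serves a /health endpoint.
class AppHandler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/health":
            body = b"ok"
            self.send_response(200)
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_error(404)

    def log_message(self, *args):  # keep test output quiet
        pass

# End-to-end smoke test: boot the server, hit the core endpoint,
# and assert it responds successfully.
server = http.server.HTTPServer(("127.0.0.1", 0), AppHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

url = f"http://127.0.0.1:{server.server_port}/health"
with urllib.request.urlopen(url) as resp:
    assert resp.status == 200
    assert resp.read() == b"ok"
server.shutdown()
```

A test like this is deliberately shallow: it won’t localize a bug for you, but it cheaply confirms that the whole stack boots and answers, which is where a large chunk of production-breaking regressions show up.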
Large company with mature product: At this point, the complexity of your codebase is so high that not testing a new feature is probably off the table, even with all the other defenses your company probably has. You should focus on making the codebase as testable as possible so that writing tests is easy. But, you should still not feel compelled to write a test for every trivial change or bug fix. Always think about the cost-benefit ratio.
Remember the two rules of thumb:
- There is no one-size-fits-all approach.
- Tests are like armor: they protect you, but slow you down.
Afternote: Typed languages vs. untyped languages
Should you write more tests if you’re working in an untyped language? Yes! This follows from the question above: “Is it easy to cause regressions in this part of the code?” It is trivially easy to cause regressions in untyped code, so the benefit of writing tests is higher. On the flip side, types are like free mini-tests embedded throughout your code, so in a typed language you should pull back on testing “obvious” stuff. By obvious, I mean things like passing a variable of the incorrect type or an undefined value, because the compiler will forbid that for you.
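As a tiny illustration (hypothetical code), here is the kind of regression a compiler would reject before the code ever ran, but that only a test catches in an untyped language:

```python
# Hypothetical example: in untyped code, nothing stops a caller from
# passing prices as strings; only a test surfaces the resulting bug.
def cart_total(prices):
    total = 0
    for p in prices:
        total += p  # raises TypeError if p is a string like "3.50"
    return total

# The happy path works fine.
assert cart_total([2, 3, 5]) == 10

# A regression that passes strings only fails at runtime; in a typed
# language, the equivalent mistake would not compile.
try:
    cart_total(["2", "3"])
    raised = False
except TypeError:
    raised = True
assert raised
```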