You don’t need Orthodox TDD to code well

Orthodox TDD: start with a skeleton and slowly dress it up with functionality.

There are plenty of opinions on how to do Test-Driven Development well. Today I want to discuss the drawbacks of the orthodox TDD school; let’s call it OTDD. Although as a development method it lacks a how-to bible like the Scrum Guide, some proponents defend their take with a religious level of conviction. To my understanding, OTDD has the following characteristics:

  • You start by writing a failing test against a skeleton implementation. Then you dress it up by implementing the scenario, making the test pass. Test and production code evolve strictly in tandem, and you typically have a one-to-one relationship between a test suite and a source file, usually a class.
  • It follows that all dependencies of a class under test must be set up using test doubles (spies, stubs or mocks). This applies not only to unmanaged, out-of-process dependencies such as database, network and file handles, but also to any class in your own deliverable.

Applying this approach retroactively to an existing, tightly coupled codebase can be a tremendous pain. Consider the following example, and again I’m not making this up.
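The code in question isn’t reproduced here, but based on the description that follows, a representative reconstruction could look like this. Only the names Person, enrichPerson and sendMail come from the discussion; everything else is an illustrative assumption:

```java
// Hypothetical reconstruction of the tightly coupled code under discussion.
// Only Person, enrichPerson and sendMail are named in the text; the rest is assumed.
class Person {
    String name;
    String email;
    Person(String name, String email) { this.name = name; this.email = email; }
}

class PersonEnricher {
    // Void method that mutates its argument in place and returns nothing.
    static void enrichPerson(Person person) {
        person.name = person.name.trim();
    }
}

class MailGateway {
    // Another void method with side effects on the outside world.
    static void sendMail(Person person) {
        // imagine an SMTP call here
    }
}

class RegistrationService {
    void register(String name, String email) {
        // Construction happens inside the method: a test cannot substitute it.
        Person person = new Person(name, email);
        // Static call that mutates the instance; a test can only intercept it.
        PersonEnricher.enrichPerson(person);
        MailGateway.sendMail(person);
    }
}
```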

Unit-testing this the orthodox way is hellish. You have to intercept construction of the Person object, mock out the static call to enrichPerson and replay whatever enrichPerson does to the instance. Yet what happens inside the void enrichPerson and sendMail methods is largely irrelevant to the unit test: beyond intercepting and recording that they are invoked, the test validates nothing of what they do to our poor Person object.

It’s clear that this kind of tight coupling and mutating of input parameters is terrible for unit testing. And once you adopt OTDD, any coding style that gets in its way must be bad. OTDD doesn’t just favour loose coupling; you need it in order to avoid a world of hurt. Sounds good, but isn’t that rather turning the argument on its head? Tight coupling and void methods that mutate their input are a terrible idea regardless of testability. They make code difficult to read and reason about. If you are a prudent coder and there is good review discipline in the team, you strive for clarity, not just for your colleagues but also for your future self. You shouldn’t need OTDD as the stern taskmaster to keep you from creating a sticky ball of spaghetti.
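For contrast, here is one way the same flow could look with the coupling loosened: enrichment returns a value instead of mutating its argument, and the mail dependency is injected. This is a sketch under my own assumptions, not code from the article:

```java
// Illustrative sketch: no in-place mutation, no static calls.
interface Mailer {
    void send(Person person);
}

class Person {
    final String name;
    final String email;
    Person(String name, String email) { this.name = name; this.email = email; }

    // Enrichment returns a new value instead of mutating the receiver.
    Person enriched() {
        return new Person(name.trim(), email.toLowerCase());
    }
}

class RegistrationService {
    private final Mailer mailer;

    RegistrationService(Mailer mailer) {
        this.mailer = mailer; // injected, trivially replaceable in a test
    }

    Person register(String name, String email) {
        Person person = new Person(name, email).enriched();
        mailer.send(person);
        return person; // the interesting result is observable without a mocking framework
    }
}
```

This version is easier to test, but more importantly it is easier to read: the data flow is visible in the return values rather than hidden in side effects.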

OTDD by definition gives you close to 100% coverage. I needn’t tell you that code coverage means absolutely nothing without proper assertions, but in many cases it’s plain silly to have everything covered by a unit test. Here’s another example:
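The example itself isn’t preserved here; judging from the discussion that follows, it is a service method along these lines. In the real thing the repository would be a Spring Data interface and the mapper would be generated by a tool like MapStruct; the hand-rolled interfaces below are stand-ins so the sketch is self-contained:

```java
// Hedged reconstruction of the "simple one-liner" discussed below.
interface PersonRepository {
    PersonEntity findById(long id); // Spring Data would derive this query for you
}

interface PersonMapper {
    PersonDomain toDomain(PersonEntity entity); // MapStruct/Dozer would generate this
}

class PersonEntity {
    long id;
    String name;
    PersonEntity(long id, String name) { this.id = id; this.name = name; }
}

class PersonDomain {
    final String name;
    PersonDomain(String name) { this.name = name; }
}

class PersonService {
    private final PersonRepository repository;
    private final PersonMapper mapper;

    PersonService(PersonRepository repository, PersonMapper mapper) {
        this.repository = repository;
        this.mapper = mapper;
    }

    // The one-liner: fetch an entity, map it to the domain layer.
    PersonDomain findPerson(long id) {
        return mapper.toDomain(repository.findById(id));
    }
}
```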

And here is its OTDD buddy:
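Again a reconstruction rather than the original: a typical OTDD test would stub both collaborators with a tool like Mockito, but the hand-written doubles below show the same pattern without any framework. The supporting types are repeated so the snippet compiles on its own. Note how the assertions can only ever echo the stubbing back:

```java
// Minimal stand-in types, mirroring a Spring Data repository and a mapper.
class PersonEntity { long id; String name; PersonEntity(long id, String name) { this.id = id; this.name = name; } }
class PersonDomain { final String name; PersonDomain(String name) { this.name = name; } }
interface PersonRepository { PersonEntity findById(long id); }
interface PersonMapper { PersonDomain toDomain(PersonEntity entity); }

class PersonService {
    private final PersonRepository repository;
    private final PersonMapper mapper;
    PersonService(PersonRepository repository, PersonMapper mapper) {
        this.repository = repository;
        this.mapper = mapper;
    }
    PersonDomain findPerson(long id) { return mapper.toDomain(repository.findById(id)); }
}

class PersonServiceTest {
    boolean repositoryCalled;
    boolean mapperCalled;

    void findPerson_mapsEntityToDomain() {
        // Arrange: stub both collaborators, recording that they were hit.
        PersonEntity entity = new PersonEntity(42L, "Alice");
        PersonDomain domain = new PersonDomain("Alice");
        PersonRepository repository = id -> { repositoryCalled = true; return entity; };
        PersonMapper mapper = e -> { mapperCalled = true; return domain; };
        PersonService service = new PersonService(repository, mapper);

        // Act
        PersonDomain result = service.findPerson(42L);

        // Assert: we only ever get back exactly what we stubbed in.
        if (result != domain || !repositoryCalled || !mapperCalled) {
            throw new AssertionError("findPerson should delegate to repository and mapper");
        }
    }
}
```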

The Spring Data framework creates handles to read and write entity classes with the Hibernate ORM framework. The PersonMapper handles the conversion of data classes from the database layer to the domain layer, possibly using a framework like Dozer or MapStruct. There’s a huge amount of third-party code going on behind the scenes of this simple one-liner. There’s plenty of room for configuration errors in these frameworks, but this already bulky unit test disregards them all. Mocking out Spring Data and the mapper does not catch any interesting bugs. You need integration tests against a running database for that.

Tests take time to write well, and their wholesome effect on code quality is just that: a bonus. Their main motivation should be to validate business rules and prevent regressions. There is nothing in TDD that obliges you to strive for 100% coverage, much less to keep one test suite per class. You can still code and test in tandem, but your units are units of behaviour and functionality, not units of code. Concentrate on the part where the most pertinent business rules are implemented, not the plumbing around it.
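As a toy illustration of a unit of behaviour (names entirely made up): the rule “orders over 100.00 get 10% off” is pinned down through the public API, without doubling out the internal helper that implements it.

```java
// Illustrative only: the business rule lives behind one public method.
class PricingService {
    // Internal helper; a behaviour-level test has no reason to mock this.
    private long discountCents(long totalCents) {
        return totalCents > 10_000 ? totalCents / 10 : 0;
    }

    // Public API; amounts in cents to avoid floating-point noise.
    long finalPriceCents(long totalCents) {
        return totalCents - discountCents(totalCents);
    }
}
```

A single assertion on finalPriceCents covers the helper for free, and the helper can be split, renamed or inlined without touching the test.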

Other metrics of code quality beyond test coverage are easy to get and monitor with static code-analysis tooling. Cyclomatic complexity (the number of possible execution paths) in particular is a great indicator of where to focus your testing efforts. Mutation testing is another favourite of mine: it applies tiny alterations (mutants) to the bytecode in memory and checks whether your tests fish them out. Here’s an older post of mine on the subject. But best of all: use your good sense. Solving things with code is always expensive, so use it with care.
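To make the mutation-testing idea concrete, here is a toy example (made up for illustration; on the JVM a tool like PIT applies such mutations to the bytecode automatically):

```java
// Production code under (mutation) test; names are made up.
class Eligibility {
    static boolean isAdult(int age) {
        return age >= 18;
    }
}
// A typical mutant flips the boundary condition to `age > 18`. A suite that
// only checks isAdult(30) and isAdult(5) passes against both versions, so the
// mutant survives; a single assertion on isAdult(18) kills it.
```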

I have been working in software since 1999, writing on whatever fascinates me about the craft, usually with a focus on testing and sensible agile practice.