Test metrics give false security. Focus on your dangerous code first.

TL;DR
Pushing for test coverage is a bad incentive: it favors quantity over quality. Instead, prioritize your code by test-worthiness. I discuss three such categories. First, there is code that processes unmanaged and possibly malicious (user) input; test this for robustness. Second, there is complex business logic with room for implementation errors; test this for correctness. Third, there are quirks in trusted frameworks that make otherwise correct code perform poorly at scale; run performance tests under realistic production loads to capture these.

If you want to get serious about test automation, there are plenty of great tools to pack in your kit. Every language has its preferred frameworks and plugins for writing tests, running them, and gathering metrics into fancy reports. Slick videos can give non-techies the impression that test automation is all about picking the right tools, which will then test our code for us. Unfortunately, the tools only take care of the execution stage. You still need to write the tests, but at least they can run on cheap CPU time instead of expensive people time.

Test tooling is great at churning out stats. The most popular metrics are coverage at file, method, line, and branch level, in ascending order of usefulness. We also love to see trends develop over time, as long as they go up. Yet in our hearts we know that test coverage alone is not enough. It really only tells you which production code was touched by some test code and that no runtime errors occurred along the way. It’s not a good indicator of code fitness, adaptability, or any of the other *-ilities. Without assertions, the behavior of the production code is not even validated. Without proper organization and meaningful names, the body of tests becomes painful to manage and loses its greatest benefit: standing in as (part of) the specification.
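To make that last point concrete, here is a deliberately contrived sketch; the PriceCalculator and its 21% VAT rule are made up for illustration. Both tests exercise exactly the same production lines, so coverage cannot tell them apart, yet only the second one validates anything.

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import java.math.BigDecimal;
import java.math.RoundingMode;
import org.junit.jupiter.api.Test;

// Made-up production code: add 21% VAT to a net price.
class PriceCalculator {
    BigDecimal totalWithVat(BigDecimal net) {
        return net.multiply(new BigDecimal("1.21")).setScale(2, RoundingMode.HALF_UP);
    }
}

class PriceCalculatorTest {

    // 100% line coverage on totalWithVat, yet any result whatsoever would pass.
    @Test
    void coverageWithoutValidation() {
        new PriceCalculator().totalWithVat(new BigDecimal("10.00"));
    }

    // Identical coverage, but now the expected behavior is actually pinned down.
    @Test
    void coverageWithValidation() {
        assertEquals(new BigDecimal("12.10"),
                new PriceCalculator().totalWithVat(new BigDecimal("10.00")));
    }
}
```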

Quantity over quality

There’s no reason per se why driving for high coverage should result in poor-quality test code. However, the perverse incentive to do so is real. If it’s team policy to have a unit test for every line of production code to keep coverage high, you are practicing what I would call orthodox TDD. Every line of code looks equally important from a testing standpoint, but that is not reality. I use every door in my house, but the bedroom doors don’t need a lock. Likewise, each line of code is necessary for the software to function, but not every line is a potential pitfall that you need to carefully safeguard with your tests.

Good testers know this. They home in on the hard stuff: the parts of the application that can cause havoc when you feed them unpredictable or malicious input. They know the parts that are test-worthy, for lack of a better word. How effective is this test in preventing regressions? What’s the risk of not having this test? What could conceivably go wrong? What’s the potential damage we’re trying to prevent? The answers to these questions will tell you how thoroughly to test, if at all. This is an intuitive exercise, so don’t try to put a number on it.

It took some paragraphs, but I hope I have won you over to my point of view. Here are three categories of code that are highly test-worthy, in my not so humble opinion. It’s the kind of code that goes bump in the night and guarantees you a trip to hell and back when left untested. The price of peace is vigilance and JUnit.

One: Code that processes unmanaged input
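By way of illustration, here is a minimal sketch of a robustness test. The AmountParser and its validation rules are invented for the example; the point is that input you don’t control, whether from a web form, a file upload or a message from another system, deserves tests that throw garbage at it on purpose.

```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertThrows;

import java.math.BigDecimal;
import org.junit.jupiter.api.Test;

// Hypothetical parser for an amount field coming straight from a web form.
class AmountParser {
    BigDecimal parse(String raw) {
        if (raw == null || raw.isBlank()) {
            throw new IllegalArgumentException("amount is required");
        }
        String trimmed = raw.trim();
        if (!trimmed.matches("\\d{1,9}(\\.\\d{1,2})?")) {
            throw new IllegalArgumentException("not a valid amount: " + trimmed);
        }
        return new BigDecimal(trimmed);
    }
}

class AmountParserRobustnessTest {

    private final AmountParser parser = new AmountParser();

    @Test
    void acceptsAWellFormedAmount() {
        assertEquals(new BigDecimal("12.50"), parser.parse(" 12.50 "));
    }

    @Test
    void rejectsGarbageInsteadOfCrashingLater() {
        // Input we never control: missing, nonsense, absurdly long, or an injection attempt.
        String[] hostileInputs = {null, "", "12,50;DROP TABLE", "9".repeat(400), "<script>alert(1)</script>"};
        for (String hostile : hostileInputs) {
            assertThrows(IllegalArgumentException.class, () -> parser.parse(hostile));
        }
    }
}
```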

Two: Complex business rules
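As an illustration, here is a hypothetical discount rule with the kind of boundaries where implementation errors like to hide; the tiers and the fraud flag are invented for the example. A JUnit 5 parameterized test keeps the specification readable at a glance and nails down the edge cases.

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import java.math.BigDecimal;
import org.junit.jupiter.params.ParameterizedTest;
import org.junit.jupiter.params.provider.CsvSource;

// Hypothetical rule: 5% discount from 100, 10% from 500, but never for flagged accounts.
class DiscountPolicy {
    BigDecimal discountFor(BigDecimal orderTotal, boolean flaggedForFraud) {
        if (flaggedForFraud) {
            return BigDecimal.ZERO;
        }
        if (orderTotal.compareTo(new BigDecimal("500")) >= 0) {
            return orderTotal.multiply(new BigDecimal("0.10"));
        }
        if (orderTotal.compareTo(new BigDecimal("100")) >= 0) {
            return orderTotal.multiply(new BigDecimal("0.05"));
        }
        return BigDecimal.ZERO;
    }
}

class DiscountPolicyTest {

    private final DiscountPolicy policy = new DiscountPolicy();

    // The boundaries (99.99, 100, 499.99, 500) are exactly where implementation errors hide.
    @ParameterizedTest
    @CsvSource({
            "99.99,  false, 0.00",
            "100.00, false, 5.0000",
            "499.99, false, 24.9995",
            "500.00, false, 50.0000",
            "500.00, true,  0"
    })
    void appliesTheRightTier(BigDecimal total, boolean flagged, BigDecimal expected) {
        assertEquals(0, expected.compareTo(policy.discountFor(total, flagged)));
    }
}
```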

Three: Abstractions that don’t scale well
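Something like the following hypothetical JPA mapping is a classic case. The code is perfectly correct, and with the ten customers in a typical development database nobody will ever notice the cost.

```java
import jakarta.persistence.Entity;
import jakarta.persistence.FetchType;
import jakarta.persistence.GeneratedValue;
import jakarta.persistence.Id;
import jakarta.persistence.ManyToOne;
import jakarta.persistence.OneToMany;
import java.util.List;

// Hypothetical model (older stacks use javax.persistence). EAGER tells the framework
// to always fetch every order together with its customer, whether the caller needs them or not.
@Entity
class Customer {

    @Id
    @GeneratedValue
    Long id;

    @OneToMany(mappedBy = "customer", fetch = FetchType.EAGER)
    List<PurchaseOrder> orders;
}

@Entity
class PurchaseOrder {

    @Id
    @GeneratedValue
    Long id;

    @ManyToOne
    Customer customer;
}
```

In production, where a single customer can have thousands of orders, that one innocent-looking annotation turns every lookup into a bulk load.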

Errors like the above are not easy to prevent. You can’t find them with a unit test. Even an integration test that uses the database won’t spot them unless you simulate a large enough load. Then again, you would only write such a test if you were already aware of the danger, in which case you would not have implemented eager loading to begin with. Your best defense is having someone on the team who knows the platform’s pitfalls intimately, and always running performance tests under a realistic load.
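As a crude sketch of what “a realistic load” can mean in a test: the OrderIndex below is an in-memory stand-in for any abstraction that hides a linear scan, and the test only exposes it because it seeds a production-sized volume instead of the usual handful of rows. With ten records the 50 ms budget is trivially met; with a million it is not, and that red bar is exactly the signal you want before your users deliver it for you.

```java
import static org.junit.jupiter.api.Assertions.assertTimeout;

import java.time.Duration;
import java.util.ArrayList;
import java.util.List;
import org.junit.jupiter.api.Test;

class RealisticLoadTest {

    // Hypothetical service: a naive linear search stands in for any abstraction with a hidden O(n) cost.
    static class OrderIndex {
        private final List<Long> orderIds = new ArrayList<>();
        void add(long id) { orderIds.add(id); }
        boolean contains(long id) { return orderIds.contains(id); }
    }

    @Test
    void lookupStaysFastAtProductionVolume() {
        OrderIndex index = new OrderIndex();
        // Ten records (a typical dev database) would never expose the problem; a million will.
        for (long i = 0; i < 1_000_000; i++) {
            index.add(i);
        }
        // A thousand lookups must finish within the budget; the naive implementation fails this.
        assertTimeout(Duration.ofMillis(50), () -> {
            for (long i = 0; i < 1_000; i++) {
                index.contains(999_999 - i);
            }
        });
    }
}
```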

Conclusion

I have been working in software since 1999, writing on whatever fascinates me about the craft, usually with a focus on testing and sensible agile practice.