And release features to production with a lot less stress and agony!
The impetus – why does this matter?
Hello everybody!
Today’s post is motivated by a scenario I’ve run into three times in my career so far, across three different companies: bolstering unit test coverage to meet an enterprise coverage metric. The fact that it’s happened to me and other engineers I’ve worked with at least three times suggests just how important it is.
Corporations typically mandate that committed source code meets an expected percentage of unit test coverage before shipping to production (e.g. 80% total coverage). This is done to ensure that developers not only follow solid TDD practices, but also write sufficient tests (lest bad habits accrue and coverage precipitously drops to 40% or 20%). In a sense, the coverage metric acts as a “forcing function” – it keeps industrial codebases healthy and aligned with solid engineering practices.
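To make the gate concrete, here’s a minimal sketch of what such a pipeline stage boils down to, using Python’s coverage.py and pytest. The 80% threshold and the tests/ path are illustrative assumptions for this post, not any particular company’s setup:

```python
# coverage_gate.py - a minimal sketch of a coverage gate,
# not a drop-in replacement for any vendor's pipeline stage.
import sys

import coverage
import pytest

THRESHOLD = 80.0  # hypothetical enterprise-mandated minimum

cov = coverage.Coverage()
cov.start()
exit_code = pytest.main(["tests/"])  # run the suite under measurement
cov.stop()
cov.save()

if exit_code != 0:
    sys.exit(int(exit_code))  # failing tests fail the gate outright

total = cov.report()  # prints a summary and returns total coverage as a float
if total < THRESHOLD:
    print(f"Coverage gate FAILED: {total:.1f}% < {THRESHOLD:.1f}%")
    sys.exit(1)
print(f"Coverage gate passed: {total:.1f}% >= {THRESHOLD:.1f}%")
```

A real pipeline would typically lean on the equivalent built-in flags (e.g. pytest-cov’s --cov-fail-under), but the logic is the same: measure, compare against the bar, fail the stage if short.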
But coverage often drops below expectations. This happens when engineers decide to meet the metric exactly at the bar or only a few points above it (e.g. if the expectation is 80%, they land around 80%–84%). Yes, meeting the bar technically passes and prevents stalls in the build and release of artifacts in a DevOps pipeline, but the hack is short-term; coverage is prone to dropping back below the bar whenever a developer introduces drastic changes to the source code or submits new files as part of a feature release.
But when do I care about test coverage drops?
Excellent question! Let’s suppose a feature needs to make its way to production within a Sprint’s two-week scope, and your changes decreased test coverage from 82% to 70%. Due to a lack of GitHub pre-commit checks (or other safety features), your submission to the build and release pipelines takes 15 minutes¹ and fails at the intermediate coverage stage.
It looks like you have your work cut out for you – you can expect to spend a day (or two) getting that unit test coverage back up to expectations. In a dedicated sprint scope lasting 10 business days, that leaves only 4/5ths of the sprint for feature work. You’ll feel more stressed out and crunched for delivery. Could we have avoided that stress and kept every business day of the Sprint dedicated to feature work, testing, and release?
What’s the better state?
The better ideal is for teams or organizations to enforce stricter, more meaningful coverage levels (e.g. 90%, or 10 points above expectations) all the time². The drawback is that doing so front-loads more effort onto developers (which IMHO is a good thing). But the benefits outweigh the costs – we sidestep the anxiety and frustration of deadline crunches when a feature suddenly needs to be productionized.
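On the enforcement side, the cheapest place to hold that stricter line is locally, before a commit ever reaches the remote pipeline. Here’s a hypothetical git pre-commit hook that reuses the gate sketched above (coverage_gate.py is this post’s made-up example, not a standard tool):

```python
#!/usr/bin/env python3
# .git/hooks/pre-commit - hypothetical local enforcement of the coverage bar.
# Git runs this before creating a commit; a nonzero exit blocks the commit.
# Remember to make it executable: chmod +x .git/hooks/pre-commit
import subprocess
import sys

# Reuse the same gate as the pipeline so local and remote checks can't drift.
result = subprocess.run([sys.executable, "coverage_gate.py"])
if result.returncode != 0:
    print("Commit blocked: coverage is below the team's bar.")
    print("Raise coverage, or bypass deliberately with 'git commit --no-verify'.")
    sys.exit(result.returncode)
```

A two-minute local run like this is exactly the savings footnote 1 describes, versus a fifteen-minute remote build that fails at the coverage stage.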
Now, it’s an imperfect fix – we can still expect black swan scenarios where test coverage precipitously drops on major source code changes (e.g. a massive codebase overhaul). But we’ve done the hard, assiduous work of lowering the probability of those black swans, so the delays become rare events in a career rather than routine ones. The result is a better experience for the developers, the product owners, and the leadership of a tech org.
Footnotes
- Ideally, we leverage pre-commit checks instead of a DevOps pipeline stage to catch the discrepancies earlier and save on developer cycles. Could we have spent 2 minutes in a local environment instead of 15 minutes building and releasing massive artifacts in a remote environment? ↩︎
- There are many methods to do so – examples include GitHub pre-commit checks or introducing new pipeline stages. ↩︎
