I know 100% code coverage doesn't mean an application is free of bugs. I also know that code with zero unit tests can work just fine. I do, however, firmly believe that automated tests are a must when writing complex software.
You can increase your code coverage in several ways:
- Writing unit tests to cover code that isn't being tested.
- Deleting unused classes and functions.
- Refactoring tested methods to reduce their number of uncovered lines.
There are others, I'm sure, but those are all I can think of off the top of my head (and while I'm typing this up). As for the ways you can reduce your coverage, writing new, untested code is the most obvious, but another one just occurred to me that may be common sense to others.
I could have a class with 100 sequence points, all covered. I could also have an application with 2570 of 3212 sequence points covered. This would give me just over 80% code coverage. Not bad, but what happens when I refactor that class of 100 sequence points into one that is still covered 100%, but now contains only 30 sequence points?
My coverage drops to under 80% code coverage:
2570 / 3212 = 80.0125%
2500 / 3142 = 79.567%
Sure, the drop is less than half a percentage point, but it's still a drop. My application is likely better and easier to maintain (less code), yet my coverage number has gone down. This quick realization reminded me of yet another reason raw code coverage isn't a good measure of a project.
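The arithmetic above can be sketched in a few lines (the numbers are the hypothetical ones from this post: a refactor shrinks a fully covered class from 100 sequence points to 30, removing 70 covered points from both the numerator and the denominator):

```python
def coverage(covered, total):
    """Return coverage as a percentage of covered sequence points."""
    return 100.0 * covered / total

# Before the refactor: 2570 of 3212 sequence points covered.
before = coverage(2570, 3212)
# After: the 70 removed points were all covered, so both counts shrink.
after = coverage(2570 - 70, 3212 - 70)

print(f"before: {before:.4f}%")  # just over 80%
print(f"after:  {after:.4f}%")   # just under 80%
print(f"drop:   {before - after:.4f} percentage points")
```

Deleting only fully covered code always lowers the percentage whenever overall coverage was below 100% to begin with, because the numerator and denominator shrink by the same amount.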
You may ask: How would you even notice something like this happening?
Well, you can fail a build in a continuous integration environment if coverage drops below a configured threshold.
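As a minimal sketch of such a gate (the threshold and counts are made up; real tools provide this out of the box, e.g. coverage.py's `coverage report --fail-under=N`):

```python
import sys

# Hypothetical CI coverage gate: exit non-zero when coverage falls
# below a configured threshold, which fails the build.
THRESHOLD = 80.0  # minimum acceptable coverage, in percent

def check_coverage(covered, total, threshold=THRESHOLD):
    """Return an exit code: 0 if coverage meets the threshold, 1 otherwise."""
    pct = 100.0 * covered / total
    if pct < threshold:
        print(f"FAIL: coverage {pct:.2f}% is below threshold {threshold}%")
        return 1
    print(f"OK: coverage {pct:.2f}%")
    return 0

if __name__ == "__main__":
    # After the refactor in this post, 2500 of 3142 points are covered,
    # so this build would fail even though the code got better.
    sys.exit(check_coverage(2500, 3142))
```

Note the irony: with a fixed 80% threshold, the refactor described above would break the build, which is exactly why a coverage gate is better treated as a signal to investigate than as a hard quality verdict.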