Why 50% Test Coverage Seems More Painful Than No Test Coverage

Recently I was on a project where a bunch of code had been written before we arrived. It was quite a struggle to get the application under test. After a number of months the team hit 50% and then we just stayed there. We had a hard time getting client developer buy-in on the push upward from 50%. I didn't really understand this attitude at first, but after talking with the devs, I realized that the tests were mostly a nuisance for them. They saw it like this:
"If I have to gut a few pages, as part of a changing requirement, now I also have to spend a day fixing the stupid tests. And the tests never really catch any bugs, so what was the point? All the tests are doing is slowing me down."
Since the coverage was low and many of the test writers were new to unit tests, we didn't really have a lot of protection from bugs. But we also had a sizable suite to maintain. They were feeling all the pain of keeping a test suite running but seeing none of the benefits. Now, as a guy who'd been on projects with good test coverage, I could see how the suite was making things better: First, I had a roughly 50% chance of being able to use the tests as documentation for any class I opened up. And second, the tests were catching some problems before they made it into the repository -- but not enough for the client devs to notice.

We never did convince them that writing tests allowed them to go faster in the long run, and it made me wonder: how many teams get to 30-60% coverage and give up because the benefits are too subtle and the pain is too great?


João Bosco said…
In my opinion, automated tests work much better when you develop them at the beginning of the project. That's because TDD actually helps you in the process of building software.

When you have to build automated tests for software that's already written, people start to complain about it. It happened in some projects at my company too.

Usually, the value of test cases only appears after a short period.

Of course, if the test is as simple as

testGetXXX() {
    assertEquals("abc", object.getXXX());
}

this test will probably never fail. Are most of the tests that simple?
Jason said…
I am currently in the position where, for the last 15 months, I have been in charge of bringing test coverage of an application from 0% to 70%. The team was extremely responsive and really grokked the value of the unit tests. The company is now moving operations and an entirely new development team is being put in place. I am quite interested in hearing how a new team deals with this situation.
William said…
I think it depends on which 50% you cover.

My strategy is to pick some small area of the system and get 100% coverage. And in that area, never relent. Then, gradually expand the area as you have capacity.

Eventually, developers will notice that working in the clean areas is much more pleasant than the messy ones.
Martin Cron said…
"And the tests never really catch any bugs, so what was the point?"

It's obvious to many people, but worth pointing out: the point of the tests is to prevent bugs, not find them.

Designing for testability makes your code better. But if they aren't designing for testability, and are just testing the 50% of the code that's easy to write tests for (such as the get/set example posted earlier), then there is no point.
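To make the contrast concrete, here's a hypothetical sketch in the spirit of the snippets above. The `Invoice` class, its bulk-discount rule, and the test names are all invented for illustration; the point is that the accessor test merely mirrors the getter, while the behavior test can actually catch a regression.

```java
// Hypothetical example: Invoice and its 10% bulk-discount rule are invented.
class Invoice {
    private int total = 0;
    private int itemCount = 0;

    void addItem(int price) {
        itemCount++;
        total += price;
    }

    int getTotal() {
        // 10% bulk discount on orders of 10 or more items
        return itemCount >= 10 ? total * 90 / 100 : total;
    }
}

public class InvoiceTest {
    public static void main(String[] args) {
        // Trivial accessor test: mirrors the getter, will almost never fail.
        Invoice empty = new Invoice();
        assertEquals(0, empty.getTotal());

        // Behavior test: exercises the discount rule, so it fails if someone
        // breaks the threshold or the rate -- a bug worth catching.
        Invoice bulk = new Invoice();
        for (int i = 0; i < 10; i++) {
            bulk.addItem(100); // ten items at 100 each
        }
        assertEquals(900, bulk.getTotal());

        System.out.println("all tests passed");
    }

    static void assertEquals(int expected, int actual) {
        if (expected != actual)
            throw new AssertionError("expected " + expected + " got " + actual);
    }
}
```

Both tests count equally toward a coverage number, but only the second one protects anything.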
Anonymous said…
I think

testGetXXX() {
    assertEquals("abc", object.getXXX());
}

is a great example of how stupid test coverage is. I think tests are important for hard challenges, but not for everything.
What I've found painful isn't having 50% test coverage, but having 50% buy-in from the team. I work at a startup where development speed trumps high test coverage and low bug counts... our testing pain arises when not everyone is on the same page regarding the role of tests...