How much of your code is covered by automated tests? Chances are, the percentage is not as high as you would like it to be. I have worked in code-bases with high coverage as well as in ones with barely any tests at all, and I vastly prefer the former: a good automated test suite speeds up development and makes it much easier to determine whether the current state of the code is ready for production. The advantages of automated testing are well known in the industry, but the practice has not yet reached the ubiquity one might expect given those advantages. In fact, in my experience, teams with a high number of automated tests are the exception rather than the norm. This problem comes down to three lacks:

Lack of Knowledge

It is hard to do something properly if you don’t know how. Many developers only have a limited understanding of automated tests. They may have had some superficial training, but lack the necessary practical experience. Or they may have to deal with an outdated, badly maintained code-base that makes testing difficult. Adding tests to legacy code is challenging and requires specific knowledge; without it, the task can seem daunting, which in turn keeps developers from even attempting it. Fortunately, there is at least one good book about how to add tests to legacy code, so the situation is not hopeless.
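One technique commonly recommended for exactly this situation is the characterization test: before touching legacy code, you write a test that pins down what the code currently does, not what you think it should do. Here is a minimal sketch in Python; the function and its behaviour are invented for illustration.

```python
# Hypothetical legacy function whose exact behaviour nobody remembers.
def format_invoice_id(customer_code, year, sequence):
    return f"{customer_code[:3].upper()}-{year % 100:02d}-{sequence:05d}"

# Characterization test: run the code with a representative input, observe
# the output, and freeze it in an assertion. The test documents the current
# behaviour (right or wrong) so later changes cannot alter it unnoticed.
def test_format_invoice_id_current_behaviour():
    assert format_invoice_id("acme-corp", 2024, 42) == "ACM-24-00042"
```

With a handful of such tests in place, you can refactor or extend the legacy code with at least some safety net underneath you.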

Lack of Time

Time is always an issue in software development. No matter how careful the planning, how agile the processes, and how good the code quality, there always seems to be too little time to do things perfectly. Writing automated tests requires an initial investment that only pays off in the mid-term. Hence, there is a strong temptation to skip it completely when time is pressing.

Lack of Interest

The most troubling lack. Not every developer accepts the benefits of automated tests. Some think that automated tests are not worth the time it takes to create and maintain them. They feel hindered by the tests, especially by unit tests, which can be quite brittle. This mind-set is especially common when developers rely on a quality assurance team to find bugs; that separation of work often makes developers feel that testing is not their job. Naturally, developers will struggle to write automated tests if they don’t consider them helpful.

Dealing With the Three Lacks

So, where does this leave us? The lack of knowledge is the easiest to fix: there are good books and tutorials on automated testing, so any motivated developer can educate themselves. If a development team as a whole lacks the necessary knowledge, it might make sense to arrange a team training. However, such trainings are no silver bullet. While they are quite good at conveying the core ideas behind automated testing, they often don’t focus enough on legacy code. It is one thing to write automated tests in a training on clean (and artificial) example code, but something else entirely to add automated tests to a legacy code-base. In my experience, many developers are quite keen to write tests when they leave the training, but the daily struggle against legacy code quickly demotivates them. I think the best way to spread the knowledge in the team is mentoring: if you have one or two developers who are good at writing tests, they can assist their colleagues and thereby slowly spread the knowledge through the team. Naturally, the mentors need to have the time to do just that, which is a nice transition to dealing with the lack of time.

As already mentioned, a lack of time is ubiquitous in the software industry, and writing automated tests is one of the things that often get lost in the fray. The best fix for this is test-driven development (TDD): there is no way to ship a feature without a test if the feature doesn’t exist until a test is written. In general, I don’t believe that it matters when you write your tests, but if you face a lot of time pressure, embracing TDD can be a way out. You should also make sure that the creation of automated tests is part of your testing/quality-assurance strategy. While it can be acceptable to neglect automated tests in certain situations (e.g., when prototyping or in early versions of experimental products), it will become a problem in the mid-term: adding tests to a legacy code-base is always more expensive than writing them in parallel with the production code.
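To make the TDD workflow concrete, here is a minimal red-green sketch in the style of pytest; the feature (a capped discount calculation) and its names are invented for illustration. The test is written first and fails, and only then is just enough production code written to make it pass.

```python
# Step 1 (red): the test is written before the feature exists, so running it
# fails. It acts as the executable specification of the feature.
def test_apply_discount_caps_at_fifty_percent():
    assert apply_discount(price=100.0, percent=80) == 50.0

# Step 2 (green): just enough production code to make the test pass.
def apply_discount(price, percent):
    capped = min(percent, 50)          # discounts above 50% are capped
    return price * (1 - capped / 100)
```

A third step, refactoring under the protection of the now-passing test, completes the cycle. Because the feature only comes into existence through its test, there is no test left to skip when the deadline looms.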

Last, we have the lack of interest. Often, the motivation to write tests increases once developers have enough time to write them and are familiar with the practice, so with a bit of luck you will win over many developers simply by addressing the first two lacks. However, you cannot count on it, and some developers will still need convincing afterwards. From my observation, much of the resentment towards automated tests stems from a misunderstanding: the main point of automated tests is not to find bugs. The main point is to keep development speed up! The tests prevent you from (re)introducing bugs into the code when it is modified. Without them, development slows down over time until it is barely a crawl, because manual testing takes more and more time. Anyone who has ever worked in a large, messy code-base without any automated tests can tell you that it is no fun at all. Skipping the tests means that you develop faster at the beginning of the project, but much more slowly in the mid- and long-term. If this argument doesn’t sway the naysayers, you will just have to agree to disagree, unless you have some clout over them (e.g., quality standards). However, forcing unwilling developers to write tests will be a miserable experience for everybody involved.
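What “preventing reintroduced bugs” looks like in practice is a plain regression test. A small sketch, with the function and the bug invented for illustration:

```python
# Hypothetical bug fix: parse_quantity used to crash on inputs that contain a
# thousands separator, such as "1,200".
def parse_quantity(raw):
    return int(raw.replace(",", ""))

# Regression test: if a later "cleanup" removes the replace() call, this test
# fails immediately instead of the bug quietly returning to production.
def test_parse_quantity_accepts_thousands_separator():
    assert parse_quantity("1,200") == 1200
```

Every fixed bug that gets a test like this stays fixed, and that is precisely what keeps development speed up as the code-base grows.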

That’s all I have to say on this topic. I hope that you now have a better understanding of the reasons behind your developers’ lack of enthusiasm for automated tests. If you liked this blog post, please share it with somebody. You can also follow me on Twitter/X.