Is there a connection between testing and quality? A lot of people think that the cause of low-quality software is insufficient testing. The truth is very different:

Testing doesn’t raise quality!

Testing and quality are related, but the relationship is much looser than many think. Testing can show you the absence of quality by exposing the bugs in your software to the light of day. However, it can only prove the presence of bugs, never their absence: just because you didn’t find any bugs doesn’t mean there aren’t any left in the code. So testing is a quality check, and not even a very reliable one. Nevertheless, testing has an impact on quality if you find bugs and decide to fix them; afterwards, your code will (probably) be in a better state than before. This is why so many people link testing directly to quality, but it is an oversimplification. Just as you don’t fill up your gas tank by checking how much is left, you don’t raise your software quality just by testing it. Instead, you have to act on the results and explicitly improve the software.
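This asymmetry can be made concrete with a small sketch (the `days_in_month` helper and its bug are invented for illustration): every test passes, yet a bug is still sitting in the code, because the tests only probe the inputs somebody thought of.

```python
def days_in_month(month: int) -> int:
    """Return the number of days in a month of a non-leap year (1 = January)."""
    days = [31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31]
    return days[month - 1]  # bug: month 0 silently returns December's value

# These tests all pass...
assert days_in_month(1) == 31
assert days_in_month(2) == 28
assert days_in_month(12) == 31

# ...yet the function happily answers for a month that doesn't exist:
print(days_in_month(0))  # → 31, because days[-1] wraps around to December
```

A green test run here proves nothing about the absence of the wrap-around bug; it only proves that the three inputs somebody chose to check behave as expected.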

So, what does raise software quality? Among other things: fixing bugs, removing code smells, removing warnings, improving performance, and making your software easier to use. As already mentioned, these activities may be the result of testing, but it is very important to remember that somebody has to react to the test findings for the testing to have any effect at all. This may sound obvious, but testing is often done so late in the software development lifecycle that there simply isn’t enough time to address the findings. The issues then either remain unfixed or are deferred to a later release. I like to call this kind of testing fig leaf testing: its only purpose is to hide the fact that you don’t have sufficient quality control.

With that in mind, what kind of testing should you do? First of all, prefer automated tests over manual tests. Manual testing is a bad investment: the time spent is gone forever, whereas a good automated test pays for itself many times over if you frequently change the code it covers. Manual testing should be limited to things a computer cannot check (e.g., usability testing). The absolute worst way to test is guided manual testing. Here, so-called test cases are written: long documents listing the steps the tester has to execute and what they need to check along the way. “Executing” those test cases is soul-crushing and highly error-prone, as humans aren’t made for following dozens of steps to the letter hundreds of times a day. Don’t use test cases!
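To show what replacing a written test case looks like in practice, here is a minimal sketch using Python’s standard `unittest` module (the `apply_discount` function is invented for illustration; it stands in for whatever your manual test case would exercise by hand):

```python
import unittest

def apply_discount(price: float, percent: float) -> float:
    """Return price reduced by percent, never accepting an invalid percentage."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class DiscountTest(unittest.TestCase):
    """Each method replaces one step a human tester would otherwise repeat by hand."""

    def test_regular_discount(self):
        self.assertEqual(apply_discount(100.0, 25), 75.0)

    def test_zero_discount_changes_nothing(self):
        self.assertEqual(apply_discount(80.0, 0), 80.0)

    def test_invalid_percent_is_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(50.0, 150)

if __name__ == "__main__":
    unittest.main()
```

The machine now executes those steps identically on every run, in seconds, without ever getting bored or skipping a check, which is exactly what the human tester working through a test-case document cannot guarantee.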

If you are currently not satisfied with your code quality, testing more is not the answer. You probably have more than enough quality issues in your backlog or bug tracker to keep you busy for a while. Instead, you need to figure out why your code quality became so bad in the first place. It was probably caused by a combination of time pressure, sloppy developers, and a lack of focus on quality. Those need to be fixed first. Then you can start slowly improving the quality of your code by writing more automated tests, adding static code checks, creating a decent continuous integration pipeline, and so on. It will take time, but with each passing month your code will become a little bit better. In general, it takes longer to clean up a codebase than it does to mess it up, but it is possible nevertheless. As you cannot drop everything and spend all of your time improving the code, you will have to do it iteratively. Don’t be discouraged by the slow pace: it is a marathon, not a sprint.

To sum up: Testing does not raise your quality, as it can only prove the presence of bugs in your code. You will not improve your quality by testing more. Manual testing is a bad investment and should be minimized.