
Unit testing and TDD misconceptions

This is a review of an article covering unit testing and TDD: https://www.simform.com/unit-testing-tdd/

I welcome any article on these techniques as they are used by far too few developers. However, I would like to point out some sections of this particular article which are misleading.

Of course, this is my own opinion. I have a modest amount of experience doing TDD (~2 years) and unit testing (~6 years). I am far from an expert in the field, but I believe some parts of the article damage the reputation of TDD, which was surely not the author’s intent.

Time Consuming

I am almost positive the author meant unit testing without a TDD process because he mentions:

  • covering the system with tests to achieve high code coverage
  • encountering bugs in the production code and writing tests to reproduce them

These are characteristics of a non-test-driven process.

I agree that unit testing after implementing the production code is very time consuming. However, test-driving the production code is not.

Explanation

Let’s look at three testing approaches:

  1. Testless
  2. Test-After-Development (TAD)
  3. Test-Driven Development

We want one thing: getting the software to behave like we want.

In a Testless process we would simply develop the software (i.e. write the code) so that it behaves as we want, and that’s it.

However, since coding is complicated, we write the behaviour together with the bugs, so we want to use tests to keep the bugs out of production code.

I created a simple diagram which shows the duration of implementing three features using all three of the above approaches:

For TAD I have explicitly enumerated all activities which need to be done. Redesign (refactoring) is not optional, since we usually cannot write tests for a poorly designed unit of code, or the tests take a very long time to write.

The firestorm after doing things without tests is not a joke, nor an exaggeration. After a couple of “clean” features (“clean” here meaning – no time wasted on pointless tests for this simple thing I can do quickly) one soon enters firefighting mode. Immediately after the code hits production, bugs begin to manifest themselves. You try to fix them but they are hard to find. When you do find them you often see that you have to redesign a portion of the system to remove them effectively. Since that is dangerous (you might break something else) you hack the bug out. This creates more code and worsens the design. In turn, the next bug will be even harder to remove.

Conclusion

Implementing code without tests is the fastest approach if you don’t count the later work of debugging and modifying the design.

You can speed up TAD by not covering everything. Good luck choosing which parts of the code to leave untested.

TDD gives you both bug-free code and quality design which ensures that cranking out features doesn’t slow down. This makes TDD by far the fastest approach in software development.

“Trivial”

The article also mentions that developers should avoid writing unit tests for trivial code. Now, I’m not sure what’s “trivial”. If the code doesn’t have to work, then I’d rather not write it in the first place. This leads to a somewhat more complex discussion about unit tests and acceptance tests which I am not going into here. In short, test-drive “complicated” code with unit tests. Complicated means code with logical branching (if statements). “Simple” code (i.e. declarative, function call orchestration) can be tested with high-level acceptance tests.
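To illustrate what I mean, here is a hypothetical sketch (the class names and figures are made up, not taken from the article): the first class contains logical branching, so I would test-drive it with unit tests; the second merely orchestrates calls, so a high-level acceptance test through the feature is enough.

    // Hypothetical example: branching logic -> test-drive this with unit tests.
    class ShippingPolicy {
        double shippingCost(double orderTotal) {
            if (orderTotal >= 100.0) {
                return 0.0;   // free shipping above the threshold
            }
            return 4.99;
        }
    }

    // Hypothetical example: declarative orchestration with no branching ->
    // a high-level acceptance test covering the whole feature is enough.
    class CheckoutService {
        private final ShippingPolicy shippingPolicy = new ShippingPolicy();

        double totalWithShipping(double orderTotal) {
            return orderTotal + shippingPolicy.shippingCost(orderTotal);
        }
    }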

Best Practices

The article lists some “best practices” of unit testing. I agree with most of them, but I have listed a few here with which I have some problems.

Test Classes should be placed in an appropriate directory

The single argument presented is that tests are executed and distributed in a manner different from that of the production code. I fail to see why this would be an argument for separating production code and its tests into different folders.

Most of my experience has been in Java backend development, mostly using Maven. The Maven convention is just as the author describes: production code in one folder, tests in another. The tests folder mirrors the structure of the production folder, so the test & production code locations are consistent. Java’s package-protected (default) visibility scope goes hand-in-hand with the test code being in the same package: unit tests have access to package-protected classes, so you can test “inner” classes of a component package.
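A minimal sketch of that convention, using hypothetical class and package names: the test sits in a different source root but in the same package, so it can see the package-protected class directly.

    // src/main/java/com/example/orders/OrderValidator.java
    package com.example.orders;

    class OrderValidator {            // package-protected "inner" class of the component
        boolean isValid(int quantity) {
            return quantity > 0;
        }
    }

    // src/test/java/com/example/orders/OrderValidatorTest.java
    package com.example.orders;       // same package, different source root

    import static org.junit.jupiter.api.Assertions.assertTrue;
    import org.junit.jupiter.api.Test;

    class OrderValidatorTest {
        @Test
        void accepts_positive_quantities() {
            assertTrue(new OrderValidator().isValid(1));
        }
    }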

Some IDEs, notably JetBrains IDEs, offer switching between test and production classes with a single keyboard shortcut. That’s nice!

However, I found that I often want to see at a glance whether a test for a class exists. If the class and its test are kept in separate folders, the fastest way to check is to open the class and try to navigate to its test, which is slower than simply spotting the test next to its class. More broadly, I do not want my source code structure (everything from method order to folder structure) to be dictated by anyone except the developer working on the project. I may not want to adhere to the 1 unit test = 1 class rule. In such situations, the test may not be named after a class (e.g. MyClassTest).

In general, I do not want to use a framework that dictates what my root folders will be named. For that same reason I do not want to put tests in a separate folder just so the build tools can have an easy job packaging my application.

Instead, I would prefer to configure the build tools in such a way that test files are extracted from the source folder and put to one side, and production files to the other, if that is necessary. This can be done. We are developers, right?

It is bad practice to adapt your source code or your development process to tooling. It should be the other way around.

Define a standard naming convention for Test Classes

I completely agree that all test names should follow a common naming convention.

However, the proposed rules for a naming convention are too restrictive and superfluous:

Keep the source code class name same as its original name.

Maybe the author meant to say “package” instead of “name”; this is valid advice if you separate prod and test folders, but I advise against that separation (see above).

Name of the testing class should be same as the original class but appended with the test.

See above; some tests may test two classes (e.g. a single test which verifies an implementation of a design pattern spanning several classes).

The class that organises tests should have TestSuite appended to it.

I don’t see a need for test suites. I separate my tests into fast and slow. The only slow tests are integration tests, and they are named consistently so that my test runner can differentiate between the two.
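A minimal sketch of what I mean, with hypothetical class names and a hypothetical naming convention – the consistent suffix is all the runner needs to tell the two groups apart:

    import org.junit.jupiter.api.Test;

    // Fast: a plain unit test, picked up by the default test run.
    class InvoiceCalculatorTest {
        @Test
        void adds_line_items() { /* pure in-memory logic, runs in milliseconds */ }
    }

    // Slow: the consistent "IntegrationTest" suffix (my own hypothetical convention)
    // is what the test runner filters on, so this class only runs in the slow phase.
    class InvoiceRepositoryIntegrationTest {
        @Test
        void persists_and_reloads_an_invoice() { /* talks to a real database */ }
    }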

Specifically, Roy Osherove’s convention seems to me like cognitive overhead:

  • It is unreadable (in the natural-language sense).
  • The ruleset for the convention must be present at all times in all developers’ minds, whether they are writing a test or reading it.
  • It enables the developer to assemble the name without thinking about the functionality.

Personally, I use English sentences as test names (see the sketch after the list below).

  • They read more easily.
  • To name a test, the developer must articulate what it is that the unit should do. This usually leads to improvements in the architecture or discovering unclear parts of the business domain.
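Here is the kind of naming I mean, sketched with JUnit 5 and a hypothetical ShoppingCart class; @DisplayName carries the full English sentence, while the method name stays a readable shorthand of it.

    import static org.junit.jupiter.api.Assertions.assertEquals;
    import org.junit.jupiter.api.DisplayName;
    import org.junit.jupiter.api.Test;

    class ShoppingCartTest {

        @Test
        @DisplayName("an empty cart has a total of zero")
        void empty_cart_has_total_of_zero() {
            assertEquals(0.0, new ShoppingCart().total(), 0.001);
        }

        @Test
        @DisplayName("the total is the sum of all item prices")
        void total_is_the_sum_of_all_item_prices() {
            ShoppingCart cart = new ShoppingCart();   // hypothetical production class
            cart.add(10.0);
            cart.add(5.0);
            assertEquals(15.0, cart.total(), 0.001);
        }
    }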

Avoid multiple assertions in a single test

Absolutely agree, but the example is inappropriate.

I conclude from the example that it is a data-driven test, i.e. a single test which will be executed many times for many examples of inputs and outputs.

Implementing this in the “old-school” way using repetition of asserts is a valid approach in my opinion. The only thing I would improve is extracting the assert statement into a helper method, which singles out the example data and makes it more visible.
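A minimal sketch of that improvement, with a hypothetical PriceCalculator and made-up discount rules – the helper method leaves only the example data (input and expected output) visible in the test body:

    import static org.junit.jupiter.api.Assertions.assertEquals;
    import org.junit.jupiter.api.Test;

    class PriceCalculatorTest {

        @Test
        void applies_quantity_discounts() {
            assertDiscountedPrice(1, 100.0);    // no discount below 10 items
            assertDiscountedPrice(10, 95.0);    // 5% off from 10 items
            assertDiscountedPrice(100, 90.0);   // 10% off from 100 items
        }

        // The helper keeps the example data (quantity -> expected price) visible at a glance.
        private void assertDiscountedPrice(int quantity, double expectedPricePerItem) {
            assertEquals(expectedPricePerItem, new PriceCalculator().pricePerItem(quantity), 0.001);
        }
    }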

One more thing: About Adapters

The article states that unit tests are

Not feasible for database and network layer

I agree, up to a point. Adapter code should be as small as possible. I use the term “integration test” for any test exercising only the adapter and its externality. If it’s a database adapter, I use the same type of database which will be used in production. If it’s an HTTP client, I use an HTTP simulator (typically MockServer).
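As a sketch of such an integration test: the adapter class UserApiClient and its method are hypothetical, while the MockServer calls follow the simulator’s standard API.

    import static org.junit.jupiter.api.Assertions.assertEquals;
    import static org.mockserver.model.HttpRequest.request;
    import static org.mockserver.model.HttpResponse.response;

    import org.junit.jupiter.api.AfterEach;
    import org.junit.jupiter.api.BeforeEach;
    import org.junit.jupiter.api.Test;
    import org.mockserver.integration.ClientAndServer;

    class UserApiClientIntegrationTest {

        private ClientAndServer mockServer;

        @BeforeEach
        void startSimulator() {
            mockServer = ClientAndServer.startClientAndServer(1080);
        }

        @AfterEach
        void stopSimulator() {
            mockServer.stop();
        }

        @Test
        void reads_a_user_name_from_the_remote_service() {
            mockServer
                .when(request().withMethod("GET").withPath("/users/42"))
                .respond(response().withStatusCode(200).withBody("{\"name\":\"Ada\"}"));

            // Hypothetical adapter under test: it only translates HTTP into domain values.
            UserApiClient client = new UserApiClient("http://localhost:1080");
            assertEquals("Ada", client.fetchUserName(42));
        }
    }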

The adapter-externality system is pretty isolated (if not as small as logical units) and its integration test could be viewed as a unit test of the adapter.
