Test-Driven Development

Posted on 01 May 2022 by Mirko Janssen · 9 min read


A few months ago, I had to read up on and busy myself with test-driven development (TDD) at work. We had some discussions about what makes good tests, what should be tested, and how to do test-driven development the right way. Along the way, I stumbled upon some interesting talks, discussions, and papers on the topic, which I want to share.

What Is Test-Driven Development?

Maybe I should start by briefly describing what test-driven development is. I don't want to go into too much detail, because that will happen sooner or later in the following parts. The fundamental idea of TDD comes from Kent Beck, who first described it in his book Test Driven Development: By Example.

Red, Green, Refactor - TDD

The main rules (if one can call them that) are based on the cycle Red - Green - Refactor: before you start writing your code, you create a test that will fail (because no code exists yet that could make it pass). Only then do you write the code that makes that test pass (and only as much code as is needed). Afterwards, you improve and refactor your code so that it follows common software development principles.
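To make the cycle concrete, here is a minimal sketch in Python (my own made-up example, not one from Beck's book): first the failing test, then just enough code to turn it green.

```python
import unittest

# Red: this test is written first and fails, because fizzbuzz() does not exist yet.
class FizzBuzzTest(unittest.TestCase):
    def test_multiples_of_three_return_fizz(self):
        self.assertEqual(fizzbuzz(3), "Fizz")

    def test_other_numbers_are_returned_as_strings(self):
        self.assertEqual(fizzbuzz(1), "1")

# Green: just enough code to make the tests pass -- and no more.
def fizzbuzz(n: int) -> str:
    if n % 3 == 0:
        return "Fizz"
    return str(n)

# Refactor: with the tests green, the implementation can now be cleaned up safely.

if __name__ == "__main__":
    unittest.main()
```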

What Makes a Good Test?

As I mentioned at the beginning, there were discussions about what makes a good test and what a test even is, because over the years more and more types, styles, and principles of testing came up, such as behaviour-driven development. To answer the question What makes a good test?, I personally think that we need to define the term test and then approach the answer cautiously by looking into some related questions.

What Is a Test?

When we talk about TDD, we are talking about tests, but which tests exactly do we mean? There are different kinds of tests: unit tests, integration tests, end-to-end tests, etc. Which one do we refer to when we speak about TDD or about writing tests in TDD? The answer (in my humble opinion) is that it doesn't really matter - not for the purpose we want to achieve by writing tests in TDD. The goals for these tests are the following:

  • We think about what we will implement: the use case, scenarios, input, outcome, and output.
  • After adding new code or a feature to the software, we run all our tests and check whether the newly created test passes and whether we broke something.
  • We get feedback very fast. While implementing, we run all our tests (or certain tests) and get feedback quickly. This helps during implementation because it becomes easier to see whether we are making mistakes. (This argument is stronger for TDD than it is for testing in general.)

Unit tests, integration tests, and end-to-end tests can all fulfil these purposes. This means that it's the use case, or the reason why we are creating the test, that tells us which kind of test we need to create. A funny thing that came up in this discussion was the question of when a test counts as a unit test and when as an integration test. For me, Martin Fowler put this very well in his article On the Diverse And Fantastical Shapes of Testing, where he writes not only about the "shapes of testing" (e.g. the testing pyramid) but also about the different schools or approaches when it comes to mocking in tests. On that topic, I also want to recommend this article by Jim Newbery.
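To illustrate the two schools Fowler describes, here is a rough sketch under my own assumptions (the OrderService and its repository are invented for this post): the classicist style uses a real, lightweight collaborator and asserts on the resulting state, while the mockist style replaces the collaborator with a mock and verifies the interaction itself.

```python
import unittest
from unittest import mock

# Hypothetical collaborators, made up purely for illustration.
class InMemoryOrderRepository:
    def __init__(self):
        self.saved = []

    def save(self, order):
        self.saved.append(order)

class OrderService:
    def __init__(self, repository):
        self._repository = repository

    def place_order(self, order):
        self._repository.save(order)

class ClassicistStyleTest(unittest.TestCase):
    # Classicist ("Detroit") style: use a real, lightweight collaborator
    # and assert on the resulting state.
    def test_placed_order_is_persisted(self):
        repository = InMemoryOrderRepository()
        OrderService(repository).place_order("order-1")
        self.assertIn("order-1", repository.saved)

class MockistStyleTest(unittest.TestCase):
    # Mockist ("London") style: replace the collaborator with a mock
    # and verify the interaction instead of the state.
    def test_placed_order_is_saved_via_repository(self):
        repository = mock.Mock()
        OrderService(repository).place_order("order-1")
        repository.save.assert_called_once_with("order-1")

if __name__ == "__main__":
    unittest.main()
```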

In most cases (this is my experience, and maybe yours too), unit tests and integration tests are the way to go, and end-to-end tests are more of an optional thing. With that said, I would like to go into a bit more detail here and talk about when to create tests, and for what.

What Needs to Be Tested?

In the previous section, I wrote that end-to-end tests are more of an optional thing. Did that bother you a little? If so, good! It's not that easy to say - if not outright wrong - and that's the reason why we also need to ask what needs to be tested. In some cases, we need these tests and they aren't optional at all. In fact, there might be cases where we need more end-to-end tests than unit tests or integration tests. Anyway, it doesn't matter ... as Martin Fowler (or, to be more precise, Justin Searls) stated in the previously mentioned article, neither the type of a test nor hitting a certain percentage per type matters.

Testing is like a sieve

So one answer I like to give when asked this question is to see all tests of a specific type as a sieve, and an occurring bug has to pass through several sieves of different granularity. Unit tests represent the first sieve a bug needs to go through; the integration tests are the next sieve, with finer granularity; and the end-to-end tests represent the sieve with the finest granularity. So when asking whether something needs to be tested, think about these sieves and ask at which level of granularity you are currently ensuring that this issue/error/bug doesn't occur, or at which level of granularity you want to ensure it.

To make this a bit clearer, let's go through some examples:

  • You want to ensure that no invalid data is stored? Then validating this data in your value objects seems perfect (see the sketch after this list).
  • You want to ensure that your application shuts down correctly when an error occurs? Then an integration test sounds most fitting to me.
  • You want to ensure that a user cannot send invalid data via a form? Then an end-to-end test should verify this.
  • You want to ensure that your frontend sends data to the backend in the correct format? That sounds like a unit test for your data transfer object (DTO), or maybe an integration test.
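For the first bullet point, here is a minimal sketch of what such a value-object test could look like (the EmailAddress class and its validation rule are made up for illustration):

```python
import unittest

class EmailAddress:
    """A made-up value object that refuses to hold invalid data."""

    def __init__(self, value: str):
        if "@" not in value:
            raise ValueError(f"not a valid email address: {value!r}")
        self.value = value

class EmailAddressTest(unittest.TestCase):
    def test_valid_address_is_accepted(self):
        self.assertEqual(EmailAddress("jane@example.org").value, "jane@example.org")

    def test_invalid_address_is_rejected(self):
        # If no EmailAddress instance can hold invalid data, nothing
        # downstream can accidentally store it.
        with self.assertRaises(ValueError):
            EmailAddress("not-an-address")

if __name__ == "__main__":
    unittest.main()
```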

There are two more things that I need to mention when we are talking about what to test. They are (perhaps among others) also mentioned in the talk TDD, Where Did It All Go Wrong by Ian Cooper:

  • Test behaviour and focus on use cases
  • Test the public API (which goes along with the first bullet point)

In my opinion, this is the most important point when it comes to writing tests in general - it doesn't matter whether it's TDD or someone writing tests after implementing the code. The test cases should always be based on the use case. They tell you why you are adding (or added) that piece of code. Business logic, behaviour, and use cases are the essential reasons why we as software developers write code. Therefore, that is the piece of information that should be easy to grasp when looking into the tests and test cases. If one focuses on this important rule, the second bullet point follows almost by itself.
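As a hedged sketch of that difference (the ShoppingCart and its internals are invented for this example), compare a test that exercises the use case through the public API with a brittle variant that pokes at implementation details:

```python
import unittest

class ShoppingCart:
    """Invented example; only add() and total() are the public API."""

    def __init__(self):
        self._items = []  # implementation detail, free to change

    def add(self, price_cents: int):
        self._items.append(price_cents)

    def total(self) -> int:
        return sum(self._items)

class CartBehaviourTest(unittest.TestCase):
    # Good: describes the use case and only touches the public API.
    def test_total_reflects_all_added_items(self):
        cart = ShoppingCart()
        cart.add(250)
        cart.add(100)
        self.assertEqual(cart.total(), 350)

class CartImplementationTest(unittest.TestCase):
    # Brittle: couples the test to a private detail; it breaks as soon as
    # the internal representation changes, even if behaviour doesn't.
    def test_items_list_contains_prices(self):
        cart = ShoppingCart()
        cart.add(250)
        self.assertEqual(cart._items, [250])

if __name__ == "__main__":
    unittest.main()
```

If the cart's internal list were replaced by, say, a dict of line items, the first test would keep passing while the second would break for no behavioural reason at all.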

When to Use Test-Driven Development?

Another discussion that can be observed in many companies (maybe you have talked about this with your colleagues too) is when to use TDD, or whether one should use TDD at all. These questions or discussions often go along with statements like TDD takes too much time, how should I know what I'm testing if I didn't write the code first, or this is just a technique for newbies.

TDD should be one tool in our toolbox - not the only one

Statements like these are just wrong, in my opinion, because they are way too subjective or the person using them didn't really understand TDD. That doesn't mean, however, that statements like we always need to use TDD, it's the holy grail that solves all our problems are correct either - in fact, I think they are just as useless and wrong.

In my personal opinion, TDD works well when you need to handle uncertainties, especially since I remember Kent Beck saying that he came up with the idea of TDD because it helped him focus on what he needed to implement. This can be seen here, in part of the series Is TDD dead? that he did with Martin Fowler and David Heinemeier Hansson. For me, this is fundamental and helps in understanding when to use TDD.

When you create a prototype or tracer bullets, it makes no sense to do TDD, because whatever you develop will not be part of the final product; there it's just about the experience you want to gather. But when you are beyond that point and you are going to develop something that is meant to be used, then TDD has some real advantages. Thinking about what to implement (the behaviour you want to achieve) before you start implementing it helps you to plan (and to reflect on) the steps you want to take. This doesn't mean that it isn't possible to do this without TDD, but it takes more experience as a software developer to be good at it. So doing TDD or not can lead to the same results, but for most of us it's probably easier with TDD.

To be honest (and I guess I'm going to hell for saying this :D), I usually have to discuss and think about the use cases a lot before I start implementing them, so I typically write my tests after I have implemented a domain (or something like that). But that isn't something I would generally recommend.

Lessons Learned

TDD is often beneficial, but it's not some holy grail whose non-followers should be burned at the stake (which would include me from time to time :D). The most important rule for testing in general is to focus on the overall behaviour instead of simply testing input and output. When there are uncertainties, it's absolutely right to use TDD, because that's the main point of using TDD, in my opinion. There are also different perspectives on how to mock, or how much to mock, when creating tests, but even this shouldn't really have an impact on whether you write tests.

I really recommend reading the articles and watching the talks that I referenced in this post!