The Good & The Bad
TDD offers all these benefits and more. How we practice TDD affects which of the outcomes we get. Which outcomes we value and believe are offered affects how we choose to practice TDD.
One person, thinking that TDD is about delivering tests along with the implementation because that protects against future regressions, will be happy writing tests afterward. They will also see value in measuring code coverage and driving it up.
Another person, thinking that TDD is about design, will roll their eyes at code coverage: "my CC is usually high without trying, occasionally low and that's OK, and driving it higher won't help."
Another, who values TDD for the tiny pulse of satisfaction that comes with every new passing test, will focus on the tiniest red/green increments (possibly missing refactoring).
Yet another, who cares about refactoring most of all, wants comprehensive tests and minimal pinned implementation details.
I want all of the benefits and none of the drawbacks. So that shapes how I strive to practice TDD.
Thanks for broadening my perspective. I am one of the guilty ones who always points out that TDD is more a design paradigm than a testing paradigm.
On teaching others TDD and not having it adopted: I had a team where most people adopted it enthusiastically, except for one person. Then, months later, that person said, "You know, I started using TDD, and I really love working that way now." I think they needed some time to themselves, which I can relate to: I frequently feel pressure to pick something up quickly, instead of taking my time to understand and assimilate it.
Finally, TDD hasn't been "easy". It takes doing it and helping others do it (where have I heard that before?), failing, learning, and improving. That's the "problem" with agile: I wanted someone to hand me the definitive best way to do everything, instead of this "empowered to learn by doing and improving" shizzle.
I would like to share one more outcome that I personally enjoy a lot and find extremely important: the FOCUS. TDD helps me focus and reduces the possibility of "distraction" from what matters. Not only from the technology perspective, but also from the business/user-needs perspective.
Thanks a lot for sharing all these reflections!! I will definitely use some in the future :-)
100%. It's also a "flow" technique. When you focus on design (both test and production code design), you disconnect your neocortex from the implementation problem, because you're thinking about those other problems. This causes the neocortex to then NOT block the hippocampus from giving you "aha" moments. So, your software development/design starts to flow in ways that were not previously possible.
Thank you for this post, and the most recent book, a great read. For a while I questioned whether I really got TDD, as people around me kept saying that it was about design, not testing. Yet the property I utilized the most was testing and the regression suite: I try to write tests in a way that gives me almost total freedom in how I later change the structure of the code.

In my case, the most valuable and insightful (also joyful and fearless) design sessions usually happen towards the end of the feature development cycle, when the likelihood of one use case invalidating the previous design choices is much lower. So I'm not sure whether I test-drive the design, or just test-drive the development so that I can design, or refactor, with a great feedback loop. I guess one could call it keeping-design-choices-open-driven-tests(-driven-development), but as I still follow the test-prod code-refactor cycle, with different weights over time, I hope I am still allowed to call it TDD.
I have started to practice TDD. I became addicted, and now I just "can't back" (or Kent Beck?). This is the real problem.
I always start applications with a 100% test coverage requirement. On the few teams I have been on where we had this, things went dramatically better. Of course people can have bad testing habits, but I've never seen coverage requirements make them worse. The idea of just allowing lines of code to exist in the code base when you haven't proven they can even execute without error is just bananas to me.
On the design benefit: TDD also applies pressure (in a good way) to think about coupling. When you want testable code, you really need to think about separating concerns, using techniques like dependency injection, etc.; otherwise it becomes really hard to test.
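A minimal sketch of that pressure in action; the `Greeter` class and the injected clock are invented for illustration, not from the post:

```python
# Dependency injection driven by testability: the clock is passed in,
# so a test can pin the time instead of patching globals.
import datetime


class Greeter:
    def __init__(self, now=datetime.datetime.now):
        self._now = now  # injected dependency, with a sensible default

    def greeting(self):
        return "Good morning" if self._now().hour < 12 else "Good afternoon"


# In a test, we inject a fixed clock instead of the real one:
fixed_9am = lambda: datetime.datetime(2024, 1, 1, 9, 0)
assert Greeter(now=fixed_9am).greeting() == "Good morning"
```

The same seam that makes the test easy also decouples `Greeter` from the system clock, which is exactly the design pressure described above.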
A great article, as always. As an industry we seem so inclined to have these debates about exactly how many angels can dance on the head of a pin. I first used TDD seriously late in my career, and it was an epiphany in how I thought about software development.
This is perhaps a bit narrow, but I'm very curious about your thoughts in this area: one approach that seems to have worked for me to avoid structure-dependent tests is to focus on "behaviours" or "mini use-cases" for the class.
A lot of people seem to want to focus on testing methods (perhaps because of the very simple introductory examples of TDD?), and my experience is that for any class with internal state that evolves over time, this approach leads to fairly complex set-up logic for each individual method test, which very often involves adjusting internal state directly, because the test is trying to fake the missing method calls that would otherwise have produced that state.
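A small sketch of the behaviour-first alternative, using an invented `ShoppingCart`: the test reaches internal state only through the public API, so there is no set-up that pokes at private fields.

```python
# Behaviour-focused testing: drive state through public calls
# (add, then remove) rather than faking prior calls by hand.
class ShoppingCart:
    def __init__(self):
        self._items = {}  # internal state, never touched by tests

    def add(self, sku, qty=1):
        self._items[sku] = self._items.get(sku, 0) + qty

    def remove(self, sku):
        # Drops the item entirely, whatever its quantity.
        self._items.pop(sku, None)

    def count(self):
        return sum(self._items.values())


# The "mini use-case": removing an item undoes adding it.
cart = ShoppingCart()
cart.add("apple")
cart.add("apple")
cart.remove("apple")
assert cart.count() == 0
```

Because the test never mentions `_items`, the internal representation is free to change later without breaking it.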
Totally agreed about code coverage! Useful as an indicator, not a goal in itself.
Also, it doesn't tell you much about other properties like structure insensitivity; neither does mutation testing.
We have to figure out something else.
What about this:
Each test starts with a score of 0 (when the team writes a test, they add a comment on top: "// test score: 0").
While implementing the feature, the score doesn't change, but once it's done, we start updating the score according to the following rules:
- "-1" on false positives: the test fails while it shouldn't (e.g. due to a structural change)
- "-10" on flakiness: the test fails "randomly"
- "+5" on true positives: the test fails due to a behavior change that you didn't mean to introduce
(this is an example of rules that one can customize)
From time to time, the team goes through the tests with the highest and lowest scores to learn from them, and maybe rewrites those with the lowest scores.
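To make the rules concrete, here is a minimal sketch of the scoring scheme; the event names are my own labels for the three rules, and the point values are the example ones, meant to be customized:

```python
# Per-test score updates, mirroring the example rules above.
SCORE_RULES = {
    "false_positive": -1,   # failed on a structural change, not a behavior change
    "flaky": -10,           # failed "randomly"
    "true_positive": +5,    # caught an unintended behavior change
}


def update_score(score, event):
    """Return the test's new score after a recorded event."""
    return score + SCORE_RULES[event]


score = 0  # every test starts at 0
for event in ["true_positive", "flaky", "false_positive"]:
    score = update_score(score, event)
print(score)  # 5 - 10 - 1 = -6
```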
I am really curious about what you all think about this.
I love this post! Thanks Kent!
Good point concerning the feeling of slower development.
I like to call this the "Development Time Perception Bias" (which I represent like this https://twitter.com/yjaaidi/status/1338503836634320896)
The time spent switching from the IDE to the browser (or other interface) to manually test, then switching back to the IDE to debug, is so "exciting" (I want to see if it works! / I have to understand why it doesn't work) that developers don't see time passing, and the one-hour session feels like 5 minutes.
Also, that whole loop seems incompressible, while TDD seems compressible: they don't realize that the feature took 10 minutes to implement because they took 5 minutes to write the test first, and that without the test it would have taken 30 minutes, plus a higher maintenance cost they don't see yet.
I wonder what is your take on this?
I wonder: is there a way to measure the quality of your unit tests, something like assertion coverage?
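One established answer (mentioned in another comment above) is mutation testing: deliberately inject a fault into the code and check whether the suite notices. A hand-rolled miniature, with invented function names:

```python
# Mutation testing in miniature: swap an operator in the code under test
# and see whether the suite fails ("kills the mutant"). A surviving mutant
# reveals an assertion gap that line coverage alone would never show.
import operator


def make_total(op):
    """Build a 'total' function around an operator; '+' is the real code."""
    return lambda base, tax: op(base, tax)


def weak_suite(total):
    return total(2, 2) == 4   # 2 + 2 == 2 * 2, so this can't tell them apart


def strong_suite(total):
    return total(2, 3) == 5   # only addition satisfies this


original = make_total(operator.add)
mutant = make_total(operator.mul)  # the injected fault

# A mutant is "killed" when the suite fails against it.
print("weak suite kills mutant:", not weak_suite(mutant))     # False - survives
print("strong suite kills mutant:", not strong_suite(mutant)) # True - killed
```

Both suites pass against the real code and hit 100% of its lines, yet only the strong one catches the fault; the kill rate, not the coverage number, is what measures assertion quality.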
“My brain just doesn’t work that way.”
If they are being honest with this assertion, they can use TCR (test && commit || revert) instead...
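For context, TCR is Kent Beck's "test && commit || revert" workflow: if the tests pass, the change is committed; if they fail, it is thrown away. A runnable sketch in a throwaway git repo (assumes `git` is installed; the `tests_pass` flag stands in for running a real test suite):

```python
# TCR in miniature: every change is either committed (green) or reverted (red),
# so the working tree never drifts far from a known-good state.
import os
import subprocess
import tempfile

repo = tempfile.mkdtemp()
os.chdir(repo)


def git(*args):
    # Run a git command in the sandbox repo, raising on failure.
    subprocess.run(["git", *args], check=True, capture_output=True)


git("init")
git("config", "user.email", "tcr@example.com")
git("config", "user.name", "tcr")


def tcr(tests_pass):
    """One TCR step: commit on green, revert on red."""
    if tests_pass:
        git("add", "-A")
        git("commit", "-m", "tcr: green")
    else:
        git("checkout", "--", ".")  # restore tracked files to last green state
        git("clean", "-fd")         # and drop any untracked leftovers


with open("feature.txt", "w") as f:
    f.write("good")
tcr(tests_pass=True)    # green run: the change survives as a commit

with open("feature.txt", "w") as f:
    f.write("bad")
tcr(tests_pass=False)   # red run: the change is thrown away

print(open("feature.txt").read())  # -> good
```

The point for the "my brain doesn't work that way" objection: with TCR, code that isn't test-backed doesn't merely smell, it literally disappears, so the feedback is immediate and unarguable.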