Note: John does confess that he himself has not tried the TDD flow; he's only seen some examples of people/students who think they have tried TDD. If my implementation of a practice is flawed, that's not to be taken as a reflection of the practice itself. Regardless of whether one practices TDD, software design is a matter of discipline. If I have good design discipline, I can apply it with TDD as well (or, in John's case, despite TDD 😀)
I read the article you reference, and John quite obviously didn't know what TDD was when he wrote that text. As Uncle Bob wrote, it was a very inaccurate description of TDD. To dismiss something without having bothered to learn it is unfortunate. That's possibly an unfair characterization; we're human, and he might have summarized it to the best of his understanding.
How can someone be intellectually honest and say, essentially, "I'm not a fan of it because it..." when they have not actually tried the thing? If these are accurate representations of his quotes, I've lost a decent amount of respect for someone I once saw as a living legend in our field.
I misremembered, my apologies: he didn't actually say that in this podcast, but here's his conversation with Bob Martin (https://github.com/johnousterhout/aposd-vs-clean-code?tab=readme-ov-file#test-driven-development) where John says, and I quote:
"You claim that the problems I worry about with TDD simply don't happen in practice. Unfortunately I have heard contrary claims from senior developers that I trust. They complain about horrible code produced by TDD-based teams, and they believe that the problems were caused by TDD..."
And a little later
"You ask me to trust your extensive experience with TDD, and I admit that I have no personal experience with TDD. On the other hand, I have a lot of experience with tactical programming, and I know that it rarely ends well..."
So yes, his rather strong beliefs seem to be formed on word of mouth, not direct experience and practice; but then, a lot of people bash TDD without trying it in earnest. That said, the rest of what John talks about in Philosophy of Software Design is very insightful, and I don't think his ill-informed perspective on TDD makes me respect him any less.
"Yeah, I'm not a fan of TDD because I think it works against design."
The implicit assumption here is that one must use a single, consistent paradigm or technique for design. That feels like a faulty assumption to me.
It's similar to the notion that we need to use one programming paradigm.
I really like having different design tools to apply to different contexts. Some examples:
I love TDD for core domains, not so much for writing front-end code. For the latter, I prefer (keyword: prefer) a more designer-driven process (designer as in the role). Test-centered, of course, but not strictly test-driven.
Or when I encounter a codebase that wasn't built with TDD, I like using the C4 model to capture the current state and its dynamic diagrams to plan extractions or large-scale refactorings before introducing behaviors, likely using TDD. So, top-down until I can get to a place where emergent design works.
I first noticed this in "The Good, the Hype, and the Ugly", and I have observed it in every methodology discussion since: proponents of a method will say that it prevents the mistakes people commonly make (under the assumption that the developer will do everything right that they possibly can), whereas opponents will tell you that it is exactly the cause of the mistakes people commonly make (under the assumption that the developer will do everything wrong that they possibly can).
At the end of the day, mistakes will be made either way and neither side has the data to prove that their preferred approach leads to fewer or less costly mistakes.
That said, I'm with TDD on this one ;-)
I think if you pair it with BDD during the requirements definition phase and TDD during the coding phase you end up with the design in mind because you know how and why the end user needs it or may try to use it.
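To make that pairing concrete, here is a minimal sketch using plain pytest-style tests rather than a full BDD framework; the `Cart`/discount domain and all names in it are invented for illustration:

```python
# Hypothetical sketch of pairing a BDD-style scenario (the "why") with
# TDD-style unit tests (the "how"). The Cart domain is made up.

class Cart:
    def __init__(self):
        self.items = []

    def add(self, name, price):
        self.items.append((name, price))

    def total(self, discount=0.0):
        subtotal = sum(price for _, price in self.items)
        return round(subtotal * (1 - discount), 2)

# BDD level: captures how and why the end user needs it, in
# Given/When/Then form, during requirements definition.
def test_returning_customer_gets_ten_percent_off():
    # Given a returning customer with two items in their cart
    cart = Cart()
    cart.add("book", 20.00)
    cart.add("pen", 5.00)
    # When checkout applies the returning-customer discount
    total = cart.total(discount=0.10)
    # Then they pay 10% less than the subtotal
    assert total == 22.50

# TDD level: drives out implementation detail (edge cases) during coding.
def test_empty_cart_totals_zero():
    assert Cart().total() == 0.0
```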
Give that a try & let me know how it goes.
A whole article about TDD, and not a single mention of what it stands for. Bad article writing. (Spell it out at least once, in parentheses.)
"Bad article writing"--seems a bit harsh. I'm sure you could phrase it better if you tried. But feedback is a gift.
I watched parts of the interview. He spoke well but I found myself disagreeing with him on several of his points. Of course I’m biased; I’m subscribed to you Kent.
I couldn’t help thinking about your Canon TDD article and wondering if he’s really talking about Canon TDD.
Even if I had an upfront design, wouldn’t I want to start by considering what a list of tests for it might be like? Wouldn’t I want to build it incrementally, making sure it’s always very testable along the way? Wouldn’t I want to allow myself the possibility to undo a step or incorporate into my design an insight I’ve had from the building experience?
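That flow can be sketched concretely. Here is a toy example using the leap-year kata as a hypothetical stand-in for any upfront design: the test list captures the upfront thinking, and the implementation is built one test at a time:

```python
# Toy illustration of starting from a test list and building incrementally,
# keeping the code testable along the way. The leap-year kata is a
# made-up stand-in for any designed behavior.

# Test list (the upfront thinking, captured before coding):
#   [x] years not divisible by 4 are not leap years
#   [x] years divisible by 4 are leap years
#   [x] ...except centuries, which are not
#   [x] ...except centuries divisible by 400, which are

def is_leap(year):
    if year % 400 == 0:
        return True
    if year % 100 == 0:
        return False
    return year % 4 == 0

def test_common_year():
    assert not is_leap(2019)

def test_divisible_by_four():
    assert is_leap(2020)

def test_century_not_leap():
    assert not is_leap(1900)

def test_four_hundred_is_leap():
    assert is_leap(2000)
```

Each checked item was one small step, and any step could have been undone or reshaped by an insight from the previous one.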
As far as code decomposition and method size, there was something about how he argued against small methods that felt off. I found myself thinking: obviously you shouldn’t decompose large methods into sub-parts with low cohesion and tight coupling to one another. The same would be true in composing English prose.
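A small, hypothetical sketch of that point: decompose by responsibility rather than by length, so each helper is cohesive and knows nothing about its siblings' concerns (the record-parsing domain here is invented):

```python
# Hypothetical example: small methods are fine when each has one
# cohesive responsibility and minimal coupling to the others.

def parse_record(line):
    """Turn a 'name, age' line into a validated (name, age) pair."""
    name, age_text = split_fields(line)
    return name, parse_age(age_text)

def split_fields(line):
    # One job: structural splitting. Knows nothing about ages.
    name, _, age_text = line.partition(",")
    return name.strip(), age_text.strip()

def parse_age(age_text):
    # One job: validating and converting the age field.
    age = int(age_text)
    if age < 0:
        raise ValueError(f"age must be non-negative, got {age}")
    return age
```

The objection only bites when the split is arbitrary, e.g. chopping a method in half so that each piece needs the other's local state.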
Hi, Kent. My comment below turned out to be much longer than I had expected. Feels a little like developing software in that respect. Sorry about that. (Or maybe not.)
I haven't yet listened to the actual interview you link to, so I'm speaking with a certain amount of ignorance. But this bit in the quote from the interview left me nonplussed:
“I'm not a fan of TDD because I think it works against design… I think that should be the center of everything we do in development should be organized towards getting the best possible design. And I think TDD works against that because it encourages you to do a little tiny increment of design.”
I think I agree with your criticisms.
There are a couple of unanswered questions that I find especially confounding. Firstly, what is "the best possible design"? We've heard this kind of language our entire careers, but it still leaves me grasping for some sort of objective yardstick that I can use to measure "best." Like most things in life, "best" is an N-dimensional quantity. If you asked me which was the "hottest" pizza, I could use a thermometer, but if you ask me what the "best" one is, I need to include a long list of parameters, including the subjective opinion of the diner. Similarly, the best possible design, in my estimation, is the design that delivers value to the business, which usually means delivering working software to customers. And as you've pointed out, this includes future value and the value of options, not just present value.
Software is also complex, and it's pretty impossible (in most cases) to design an entire system up-front and get it right. You never know how your design will integrate with a complex reality until you actually see it in action. So again, the value of options. At some point, you have to do incremental design. So, at which point is the increment too "tiny" to be beneficial? That's the second unanswered question. Why should a single test-implement-refactor cycle necessarily be too small an increment? You mention three possible reasons it might be too small: local maxima, pressure to deliver, and blurred vision. But these factors exist at any granularity, not just with tiny increments, and we need strategies to address them in any case. Here are some of the strategies I've found useful:
* Local maxima: If you can't find a walkable path to the next maximum, jump or fly. Take a step back from the problem and tweak the design. Sometimes, this may require migrating to a new API or architecture, but in my experience, it doesn't happen often. Even if we do need to, once we identify the migration we want to perform, we can chart the steps we can take to get there. However, one thing that's missing from this discussion is the flip side of the coin: With small steps, you can actually approach a maximum. If your increment of design is too large, you'll likely end up near but not at a maximum. I can't count the number of times that I ended up with a genius-level design that I had no idea existed, because I could approach it in tiny steps.
* Pressure to release the next feature: In my experience, this is more of an excuse than an actual problem. (My experience, however, is probably not average.) There will always be pressure to deliver the next feature, and as professional programmers, it's our job to figure out how to best accomplish that. This pressure occurs regardless of whether you're using TDD, and design is part of the software-development process.
* Not able to see a more-desirable design: Again, this happens at any granularity. Take a step back from the problem and reevaluate. (We do this all the time in practice.) Use the process that you have that allows you to refine the design in order to try to seek out the most desirable (a.k.a., "best") design. The smaller the increment of design, the more likely you are to end up at a maximum.
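To ground the "single test-implement-refactor cycle" question, here is one such cycle in miniature; the Money example is invented, not from the post:

```python
# Hypothetical illustration of one test-implement-refactor cycle as a
# small but real increment of design.
from dataclasses import dataclass

# 1. Test: pick the next item off the test list.
def test_adding_money_in_same_currency():
    assert add(("USD", 3), ("USD", 4)) == ("USD", 7)

# 2. Implement: the simplest thing that passes.
def add(a, b):
    currency_a, amount_a = a
    currency_b, amount_b = b
    if currency_a != currency_b:
        raise ValueError("mismatched currencies")
    return (currency_a, amount_a + amount_b)

# 3. Refactor: the tuple-juggling above hints that a Money concept
#    wants to exist -- a design insight surfaced by this tiny increment.
@dataclass(frozen=True)
class Money:
    currency: str
    amount: int

    def __add__(self, other):
        if self.currency != other.currency:
            raise ValueError("mismatched currencies")
        return Money(self.currency, self.amount + other.amount)

def test_money_add():
    assert Money("USD", 3) + Money("USD", 4) == Money("USD", 7)
```

The refactor step is exactly where the increment of design happens: small, but pointed at a better structure, not away from it.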
We also tend to use multiple overlapping increments of design. At the level of implementation, we use TDD or something similar. Take a step back, and we're talking APIs. Step back further, and we see subsystems or microservices or user interfaces. All of these are in play simultaneously. I'm really stymied to understand why we would just exclude one of those scales of view and not the others. Yes, we don't want to lose the forest for the trees. But we need trees or else we don't have a forest. What's the problem with that?