I drew a flowchart from the article. Hope you like it.
https://whimsical.com/cannon-tdd-M74C15bNBdVmxhkLztnSXa
"end" needs to go back to "start" as we will need to change system all the time :)
Good piece, thanks.
Order of tests: I recently did the 'wardrobe kata' again (https://kata-log.rocks/configure-wardrobe-kata), and I found there are two dimensions along which one can go in TDD-ing it: the number of elements needed to fill the wall, and the number of different elements available. The two lead to completely different implementations for me.
There is also an Uncle Bob blog entry (in a weird comic style) where he explores the order of tests and how it impacts the code (with one ordering the resulting algorithm is bubble sort, with another it's quicksort): https://blog.cleancoder.com/uncle-bob/2013/05/27/TransformationPriorityAndSorting.html
Very, very, interesting topic.
Agree. I usually find that every problem has multiple "dimensions" that create a "vector space". Each of those dimensions has its own ZOM(BIE) set of points, and I try to cover a select number of such points on each of the vectors, usually selecting a next test that is just one step along a single dimension at a time. That way I only have to solve small steps at a time.
Yes, absolutely. In katas, I usually try to explore one dimension for as long as there is a need - in the real thing, I limit myself to two or three steps.
And if I have more than two dimensions, I will try to fix one dimension and extract it towards another area of the design (reducing spaces to planes, which are inherently easier to control). If that is not possible, I want a QE next to me to help me figure out test cases 🙈
I haven't thought about the dimensions as triggers for a possible refactoring move, but I can certainly see that. Good point!
It's especially great if you can a) find the one dimension that is "just a loop" or b) have completely distinct solutions for different points in that dimension.
I find the discussion alone can generate deep insight.
Additional mistake that I've seen some people make:
"Refactoring" all the other code needed for the desired end result, without adding tests for any of it.
I.e., pretty much giving up on TDD and just writing code, ignoring the whole "testing" thing.
If you're refactoring, you don't need to change the tests in order to improve how predictive (from testdesiderata.com) the test suite is. However, extracting smaller elements may enable you to write smaller tests.
Yes.
In some people's "TDD training" videos, I see them start with a few cycles of test and code. But when they've "tested in" a basic skeletal framework of methods, they stop writing tests and just "refactor in" their full intended implementation of those methods. That is, they just type in a fair amount of new code and call it "refactoring."
Some people do not seem to understand the differences between refactoring and writing new code. Yes, refactoring may introduce interfaces, extract methods, even extract code to new classes. And yes, that can add a fair number of source code lines, even new files. But it is not *writing new code*. Refactoring should not be *adding new functionality*.
To add most if statements, loops, expressions, and additional lines of code, one should incrementally add tests that fail due to the absence of those things in the code under test.
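For instance, a minimal sketch of a test that forces a new branch into existence (a made-up shipping example, JUnit-style; none of these names come from the article):

import static org.junit.jupiter.api.Assertions.assertEquals;
import org.junit.jupiter.api.Test;

class ShippingCostTest {
    // Passes with the hard-coded "return 599;" that an earlier test drove out.
    @Test
    void standardShippingUnderOneHundredDollars() {
        assertEquals(599, shippingCostCents(40_00));
    }

    // Fails until an if (or ternary) exists in shippingCostCents();
    // the failing test, not a "refactoring", is what licenses the new branch.
    @Test
    void freeShippingAtOneHundredDollarsAndUp() {
        assertEquals(0, shippingCostCents(120_00));
    }

    static int shippingCostCents(int orderTotalCents) {
        return orderTotalCents >= 100_00 ? 0 : 599;  // the branch the second test forced in
    }
}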
Kent, this article is appreciated.
I teach and practice TDD. I’d not been consistent about using a list of test scenarios. After having read this article, I’ve changed how I do TDD.
I did a TDD of Fibonacci demo last week in front of a class. I think starting with a thought like “How many tests do you think we’re going to write and why” was a noticeable improvement especially for a problem like that where the input can be so easily partitioned.
I like knowing when I'm done. Put another way, I don't like not knowing whether I'm done. The list helps me.
I'm really curious as to how many tests people think it takes to implement a Fibonacci function. And how many does it turn out to be, when you do it, with TDD?
I would think that the "ZOM" part of "ZOMBIES" (Zero, One, Many, Boundary, Interface, Exceptions, Simple) would suggest *three*.
When I'm in a mood to be difficult, which I often am, I can drag it out to seven before I have to admit that I should stop playing games with "simple but incorrect" implementations, and actually just do it.
Interestingly, I don’t remember how many we used or decided. I am not certain, but I think it’s three. The thing I get excited about is the thought process you just did with ZOMBIES.
I remember watching Robert Martin demonstrate the bowling score TDD kata and being surprised at how few tests he used. He doesn't try to analyze/list tests in advance. He likes the surprise factor.
However, when I do it with the students, we use Kent's method of listing some tests in advance. We try to partition the example space (very lightweight and informal). I find that fun because we guess at the tests first and then explore our guess.
I'm not against listing planned tests, if that's what a person wants to do. And I think it can be helpful in the middle of TDD to make a list, if thinking about future tests gets to be distracting. But I'm almost never writing tests from a list.
I get my "seven test minimum" for Fibonacci by using a technique / game I call "evil coder" -- I don't just write the minimum code necessary to pass the test. Whenever "possible," I write a *known incorrect implementation* that passes all the tests.
I've long thought that one of the main reasons to write tests was to protect against some future maintainer accidentally "breaking" the implementation. So if I can think of any "easy" way to get the implementation *WRONG* that would not be caught by the tests, I do it. And I insist that to get the correct implementation, we need more tests.
(I do impose a rule that the "evil coder" cannot do an implementation *more complex* than the expected or known correct implementation. Otherwise, we'd never get anywhere. There has to be a limit on how bad the "bad implementation" can be.)
Starting with the test that Fibonacci of zero is zero, these are my successive implementations, leading up to the 7th test "forcing" me to do a more reasonable implementation:
return -1; // always fail
return 0; // f(0) works. f(1) fails
return index; // f(0) & f(1) work. f(2) fails.
if (index < 2) return index; else return index-1; // f(0) to f(4) work. f(5) fails.
if (index < 2) return index; else if (index < 5) return index-1; else return index; // f(0) to f(5) work. f(6) fails.
if (index < 2 || index > 4) return index; else return index-1; // f(0) to f(5) work. f(6) fails.
if (index == 0) return 0; else if (index == 1) return 1; else return fibonacci(index-1) + fibonacci(index-2); // all tests pass
return (index <= 1) ? index : fibonacci(index-1) + fibonacci(index-2); // all tests still pass
Given that
f(0) = 0, f(1) = 1, f(2) = 1, f(3) = 2, f(4) = 3, f(5) = 5, f(6) = 8
there are simple relations between the input and output until "f(6) = 8" where I can no longer get the desired output with a simple change to the input value.
(Now as for the simplistically brain-dead recursive implementation being horrifically inefficient for non-trivial index values, ... Well, there are ways we could test an O(N) implementation into existence.)
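A sketch of where that might land, assuming we added, say, a test asserting fibonacci(90) under a timeout that the naive recursion can't possibly meet:

// One O(N) implementation the same tests would accept:
static long fibonacci(int index) {
    if (index == 0) return 0;
    long previous = 0, current = 1;        // f(0) and f(1)
    for (int i = 2; i <= index; i++) {
        long next = previous + current;    // f(i) = f(i-1) + f(i-2)
        previous = current;
        current = next;
    }
    return current;
}
// assertEquals(2880067194370816120L, fibonacci(90)); // passes, and fast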
Everything else is style/preference.
I found great peace when I began to believe this. I hope the people who learn with me have found some of that peace.
- We do some of these extra things because we're mostly learning the fundamentals.
- We do some of these extra things because we recognize patterns in our behavior that need particular attention.
- We do some of these extra things because we find the work more pleasant that way.
I 100% had to correct someone on their idea of TDD this week.
"I'm almost done writing the unit tests" and "I'm doing TDD" were two sentences uttered in the same context.
I asked... "Ah, so you're almost done with this story then?" But no. I suspected as much.
Gave the benefit of the doubt and echoed your sentiment: "if you're writing all the tests first, you're not doing TDD. You're doing something else... maybe you've found that it works for you and that's fine... but it's not TDD."
I care that the job gets done, that we have a good outcome, well organized code, and well written unit tests to support future improvements to the code. At least the tests weren't an afterthought.
Sounds like someone who takes their responsibility seriously. That's good in my book. Maybe they can optimize from there.
No actual criticism of the post, just noticed this:
> surprising experience is that folks out there don’t agree on the definition of TDD. I made it as clear as possible in my book. I thought it was clear. Nope. My bad.
Martin Fowler did a whole writeup on it: https://martinfowler.com/bliki/SemanticDiffusion.html
Sadly, having come up with a term and defined it with the utmost care means nothing once the term becomes popular. People take it and run with it, rewriting the definition in their heads based on a vague understanding (or, even more often, based on the name alone).
I don't mind if the definition drifts. Part of being successful. What I mind is people saying, "<this other thing entirely> suckz".
Love the clarity, Kent! Your breakdown of the TDD workflow is gold. Especially appreciate the emphasis on a clear test list—so often overlooked. 👏 Also, Vic Wu's flowchart adds a visual punch to your wisdom!
Thanks! As a technical coach, I do teach the idea of breaking requirements into possible tests.
In practice, I tend to create new tests based on the implications of the previous test.
I do feel that if the next test is "hard", as in "I can't see the simplest way to make it pass", and I can't see a refactoring that would make it easier, then I usually look for an easier intermediate test that will let the algorithm change enough to unblock the next test.
About this: "Mistake: copying actual, computed values & pasting them into the expected values of the test. That defeats double checking, which creates much of the validation value of TDD."
I couldn't understand. Is this about pasting value in the code of the test, or into the code of the system under test? If it is the former, I fail to see the problem.
I claim that the former is a mistake. I write:
assertEquals(complicatedFunction(),
And think hmmm, what is the expected value? Oh, never mind, I'll just copy the value, so I finish:
assertEquals(complicatedFunction(), null)
Then I run it & get an error message:
null does not equal 14.75
Then I go back and write:
assertEquals(complicatedFunction(), 14.75)
I claim this is a mistake, or at least a missed opportunity. The programmer has missed out on the chance to think through the computation in two different ways. Double checking reduces the chance of errors but it takes time.
Does that make sense?
Oh, ok. I agree with you. I assumed the tested code was not written yet, and I was calculating the value manually, in a calculator, spreadsheet, etc. and copying that to the test. Now what you explained makes total sense. Thanks!
I'd like to add an additional mistake at step 2. Mistake: computing asserted values using logic that will then be used to make the test pass. I prefer to hard-code values where possible. E.g., if order line 1 is $2 and order line 2 is $3, then the expected order total is $5.
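A sketch of that in test code (Order, addLineCents, and totalCents are hypothetical names, just for illustration):

// The expected total is a hand-computed constant (2 + 3 = 5 dollars),
// not derived by the same addition the production code performs.
@Test
void orderTotalIsSumOfLineItems() {
    Order order = new Order();              // hypothetical API
    order.addLineCents(2_00);
    order.addLineCents(3_00);
    assertEquals(5_00, order.totalCents()); // hard-coded on purpose
}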
In some cases, I do computed values instead of constants. But I insist that it must be done with a different, and hopefully simpler, algorithm.
For example, to test the ACM algorithms to convert between Julian dates (days since January 1st, 1970) and Gregorian dates (month, day, year), I tested some known date value pairs first, with constants, and then I implemented a "What's tomorrow?" algorithm. I know that January 1st, 1970 is zero. Add one to the day of month, convert to Julian and get one. And I know how many days are in each month. And how many months in a year. And how to determine if a year is a leap year; February has an extra day then. So, day by day, I can compute out both Gregorian and Julian values as far as I want. And then I use them to test the direct conversions back and forth using the ACM formulaic algorithms.
Why would I do this?
Well, how do I know that the ACM formulas don't have overflow problems in my implementation language?
(Well yes, I could do some moderately intense numerical analysis. But that's a bother.)
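To sketch the shape of that oracle (toJulian/toGregorian stand in for the ACM-formula implementations under test; all the names here are made up):

static boolean isLeap(int y) { return (y % 4 == 0 && y % 100 != 0) || y % 400 == 0; }

// The "What's tomorrow?" oracle: trivially verifiable calendar arithmetic.
static int[] nextDay(int[] d) {            // {year, month, day}
    int[] len = {31, isLeap(d[0]) ? 29 : 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31};
    if (d[2] < len[d[1] - 1]) return new int[]{d[0], d[1], d[2] + 1};
    if (d[1] < 12) return new int[]{d[0], d[1] + 1, 1};
    return new int[]{d[0] + 1, 1, 1};
}

@Test
void formulaicConversionsAgreeWithDayByDayOracle() {
    int[] date = {1970, 1, 1};             // day zero
    for (int julian = 0; julian < 100_000; julian++) {  // roughly 273 years
        assertEquals(julian, toJulian(date[0], date[1], date[2]));
        assertArrayEquals(date, toGregorian(julian));
        date = nextDay(date);
    }
}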
> Folks seem to have missed this step in the book.
It's me. Normally I just launch into coding.
It seems to work for me as a way to avoid overthinking at the beginning. I always have a chance to add new test cases to the list at step 3. I maintain the list in my head, sometimes as TODO comments.
Maybe writing the list down is helpful for others; I did see some people miss scenarios after TDD-ing through the happy path. On the other hand, people may think too much about implementation details when writing the test case list, and may code too much when passing the current test case, since they cannot forget the cases remaining on the list.
> The initial step in TDD, given a system & a desired change in behavior, is to list all the expected variants in the new behavior. “There’s the basic case & then what if this service times out & what if the key isn’t in the database yet &…”
> Folks seem to have missed this step in the book. “TDD just launches into coding 🚀. You’ll never know when you’re done.” Nope
Huh, I have the book and read it a long while ago, and yeah, this is certainly a step I've been skipping. I've certainly had decent results without it, but I trust you, so I'll give this a try.
I think for me, I personally get into day-dreaming mode. If I sit still I day-dream (sometimes it's about the billions of dollars I'll eventually have, sometimes it's about how I make this beautiful piece of software that everybody loves etc. etc.). So figuring out how to get to coding as quickly as possible gets me out of day dreaming mode.
(But I'm skimming through the book right now, and I don't see this mentioned. You do seem to be jumping straight into the tests. I'm guessing you're referring to the TDD by Example book?)
I also find slowing myself down and considering the list of behaviours the hardest part. I always itch to start writing code as soon as I've determined the first handful of behaviours. In my case, that is partly driven also by a concern that if I spend too long thinking about different behaviours I'm going to start doing a full design in my head or on paper before I start on a test. Maintaining the discipline of not trying to come up with a design to cover all behaviours up front is tricky for me.
The most common complaint I've received about TDD is that it leads to bad quality software because there's no attempt to construct well-designed code. The second is that it leads to poorly tested code - there's a belief that people who practice TDD don't bother to think about testing a wide range of behaviour or considering possible edge cases. In both cases it has invariably turned out that the naysayers are not talking about TDD, but about a subset that includes only one or two of the steps.
"protip: trying working backwards from the assertions some time"
So TDD the test then 😉
Thanks for this, it's like a puzzle piece I was missing for inside my brain.
I’m not sure that I understand this. Are you just saying to not write the code first but just its invocation in the test assertion as the first step? This will ensure that you start with a failing test.
I usually write the code's method as a stub with no implementation, then poke the "Jump to test" button, which creates a blank test case for me. But I can see that if you're tempted to start filling in the implementation, the other way is better.
It's more than that. When I'm writing a test, I can get stuck on all the interface design decisions. I may know that the answer should be 4, but not know how to express the inputs. So following the "known to unknown" principle, I write:
assertEquals(actual, expected); // actually I can write this before I even know the answer is 4
expected = 4
assertEquals(actual, expected)
Then figure out how to express the inputs. Does that help?
In Sarah's (my partner's) work as a math tutor, she used this trick quite frequently with students (usually teenagers) who felt stuck. She asked them a simple question: "What do you (already) know how to do?"
For the programmer trying to articulate the next example, that might be "Well, I know that I need to check an answer, so assertEquals(expected, actual)." Programmers routinely underestimate the confidence or hope they can give themselves by writing something like that down, even when they expect to erase it in 5 seconds.
Yes. Another example of starting with the simplest thing that could possibly work (or in this case fail), then building from there in small measured steps.
That's a very good question I was wondering about too.
Try to start with the test and poke the 'create method'-button. You'll see that writing the method/function call alone will give you lots of design-insight.
> The initial step in TDD, given a system & a desired change in behavior, is to list all the expected variants in the new behavior.
This is a good initial step not just for TDD but for any "situation" where an "outcome" is desired (good luck to anyone trying to achieve a specific outcome if it's not listed anywhere or defined very well).
Here is an issue, though: it's all fun and rosy to talk in the abstract about a system and a desired change, but I think it's necessary (though not sufficient, as there is still a lot to be done to get that desired behavior) to have a good grasp of the system, so as to understand the complex interactions between completely separate parts.
A challenge is that I can't provide a simple example to illustrate the above point (let alone prove it), because by definition the kind of system I'm talking about is a complex one.
The system would have the classic issues of inherent complexity, with its four horsemen: complexity, conformity, changeability, and invisibility (see Fred Brooks's "No Silver Bullet: Essence and Accidents of Software Engineering" for more details).
I think that in principle and in the abstract TDD can be, and probably is, a great workflow, but the context is an important piece to consider in determining whether TDD is a good workflow to use or not.
For example, I claim TDD is a more challenging workflow to follow in a new project, when the system doesn't exist and isn't well defined. Rapid prototyping would probably be more applicable in such a context, until the system becomes more defined.
Here is an even worse situation where TDD is very challenging to follow: there is an existing system, and the system isn't well defined (multiple parties use the system differently, and the system doesn't have validation to coordinate between them).
It goes back to the idea of using the right tool for the job.
Just to clarify my point.
What I'm trying to say, and want to emphasize, is that at the end of the day a specific problem is its own problem and has its own context.
As much as we can abstract, think in patterns, and utilize a workflow such as TDD, that initial first step of understanding the problem remains essential (to a certain extent I argue it's impossible to 100% understand certain problems *initially*; I can expand on this if interested).
One last thing: another purpose of my comment is to highlight this idea (it will be very abstract, so hopefully it's clear):
An answer to a question might not be wrong, **but** if the question is open ended, **adding** details to the question might in fact invalidate that original answer.
I can expand on this if interested.
Thanks for reading this rambling.
As you said, it all comes down to taking responsibility for one's own work.
> Take responsibility for the quality of your work however you choose, as long as you actually take responsibility.
The below is an aside, probably not worth your time to read as it's very much half-baked thoughts, but I thought I would include it for the heck of it.
Some junk I wrote about how I see getting a desired outcome; it involves the following formulas:
Net Desired Outcome Result = Expected Benefit of Following Defined Requirement * Probability of Achieving Such Requirement - Cost of Achieving Defined Requirement
Expected Benefit of Following Defined Requirement = Short Term Value of The requirement * Short Time Period + Long Term Value of The Requirement * Long Term Period of Time
Unfortunately I don't think I can provide a formula for "Value of The requirement"
Cost of Achieving Requirement = Sum of the cost of An individual with the following (Experience + Skill + Time + Domain Knowledge)
I want to go into more details on the above (and probably restructure it a bit differently but I'll leave it at that for now).