27 Comments

Another common theme in these discussions seems to be: what is the difference between writing the tests first and writing them afterwards?

Working with a code base that has had the tests written afterwards can feel a bit like working with something you’re supposed to be able to take apart, but you can’t because whoever built it used glue to put it together.

Jul 1 · Liked by Kent Beck

I feel just a little less intelligent for having read the referenced blog post - it's neither coherent nor, as you pointed out, logically sound. I'm not even a practitioner of TDD, but I do consider myself a practitioner of logic and reason and the blog post triggers that part of me.

Having said that, I suggest one answer to your "Why the hate?" Perhaps because the hate-spewers have had TDD forced down their throats by some zealot (aka, Big-A Architect, Director of Something-Or-Other, etc), and this is a natural reaction. I can certainly picture _myself_ pushing back and desperately grasping for supposedly rational responses to something that I have been forced to do by someone who may or may not actually have a clue about doing my job. I can even imagine myself writing or head-nodding to a logically flawed response such as the blog post, given that situation.


It's a shame. I've heard many say they've tried it and it didn't work for them. Some of them also insist that writing tests is very important. Those tests often turn out rather complex and implementation-based: "it calls this specific method or else it gets the hose again."

Other possibilities:

* they're doing TDD but in a very difficult way and not getting the benefits.

* they're trying to fit in with someone/a group they see as a hero that also criticized TDD.

* click bait


Thoughtful & fascinating response to the OP & ensuing comment thread.


I've spent over 15 years teaching TDD. I did presentations, hands-on workshops, Dojos, even organized TDD conference (https://tddconference.github.io/). My love for TDD knows no bounds.

But my enthusiasm for TDD is usually met with loads of skepticism, if not downright hostility. I have come to the realization that it's not people's dislike of TDD, it is people's annoyance at my zeal. People hate zealots. People hate when someone joins them and proclaims: "You've been doing everything wrong. Here, let me show you the way." People hate the messenger, not the message.

So, in the end I may be the biggest barrier to adopting TDD for people who I've been leading, teaching, coaching. You can lead a horse to the water, but you cannot make it drink. Unless someone decides that he/she wants to try TDD for themselves, no amount of enthusiasm will rub off on them. As a matter of fact, the larger the zeal, the larger the fervour and enthusiasm, the bigger the chances that people will sabotage the efforts. And boy do they know how to sabotage it effectively.

Right now, my thinking is that only by being cool, by playing hard to get will I ever be able to introduce the change called TDD. But playing hard to get usually means no one will get me.

author

What a useful thing to realize. However, playing hard to get isn't your only other option. You can listen closely to what folks need. When something TDD-like can meet their needs, you can introduce just that part. If they get an a-ha moment, you can encourage them to spread it around. Start with the need, not the solution.


That's very wise advice. I may be way off on this, but my observation is that for TDD to deliver on its promised outcomes, the whole team needs to adopt it. Having champions/evangelists trying to lead the pack while the majority decides to sit the whole TDD thing out does not seem to work as expected.

So, the challenge is: how do you get the whole team to buy in and give it a whirl? As I mentioned, anything I've tried has only brought me stress and vexation. Yes, in every team there are usually one or two members who recognize the incredible value TDD brings and immediately become big fans of the discipline, but being few and far between, they cannot make a dent in the team. Willy-nilly, they go back to the status quo.

I now realize that change must come from within, out of people's own volition.

author

I hear your frustration. The best template I have for change is those downstream of development demanding fewer defects & a champion developer offering TDD as a path.


Interesting line of reasoning. It seems to assume that those downstream of development need some kind of nudge to get to the position of demanding fewer defects. That may be how things typically play out, I don't know, because I have a very limited view of the world. But so far, I've only noticed the opposite -- those downstream of development are always shocked when they see defects popping up. They usually assume that technical excellence is baked into the system; after all, that's why they assembled a champion team. They spared no effort, time, or money to pick the best experts. So, seeing defects pop up always comes as an unpleasant surprise to them.

There is nothing people value more than a good excuse, and my experience teaches me that teams are ingenious when it comes to explaining the defects. It all boils down to good excuses. And I've heard phenomenal excuses galore.

However, excuses are not a solution.

Jul 1 · Liked by Kent Beck

How does writing the code and then the tests compare to writing the tests and then the code?

In your hill climbing or ratcheting analogy.

As I'm writing code for a startup, I want the important things to work well, but often I find we don't know what's important nor how users will use it.

So often I'll find myself writing some code; an example is sending a request for a review a few days after the clients have used our system.

For context we are a B2B SaaS platform doing document storage, payments and things for car dealerships.

When we found out that our rating system wasn't working at all for a certain type (after a car delivery), I went back, rewrote the code, and wrote a lot of unit tests as I refactored... There's a whole story of how I hired someone to do this, and they weren't up to the task, didn't test enough, and their code failed spectacularly and started spamming users. Hence I had to rework their work.

It was fine that it was a little broken and not sending review requests; we now knew a lot better what was needed.

The real issue was that the person who tried to fix it thought his tests passed, so he didn't test manually, and that's what led to us spamming people.

The answer is more unit tests. But I still think there's an analogy in the story. I just need help extracting it.

When I wrote the original code it was for a single review type (services). So I ran up a small hill, leaving a small path as I went.

From the top of that small hill I could then see the other hills. The larger ones that had more prestige at the top.

But they came with more challenges.

As we added the ability to send review requests after a car delivery, a part purchase, and other actions, we now had a hill that was part track, part mud and part snow.

You need different gear for each of those. Running on a paved track in ice shoes isn't very effective, but going up an ice wall in sandals 🩴 is a great way to get your toes frozen off.

Having scaled the first mountain, I could see enough of what was needed; I mentally made note of it and packed somewhat accordingly.

I managed to scale the larger mountain. But I didn't do so perfectly.

However the real issue was when I tried to get a more novice person to scale the larger mountain.

I hadn't left enough plans, maps or gear.

They didn't know the route to take or how I'd actually worked out that we should take a whole different route (refactor the code).

I'd spent a few hours telling them how I'd scale the mountain if I was them, then they said they thought they were ready.

So they attempted to scale the mountain and failed. Instead they caused an avalanche.

I think that like good documentation, good unit tests tell someone not just a vague plan, but get into detailed information about how to scale a mountain.

When you want to have thousands of people scaling a mountain, you'll want a full tour guide service. You'll want to lay paving and put up guard rails and make it very hard to fall off.

For important parts of your codebase, you'll want to have the full tour guide service.

But maybe there's still cases that there's experimental mountains that we can climb, see the top of, take some photos from and work out what the next mountains that actually matter are.

Of course there's many issues with the analogy.

I did hear one recently that I really liked. A startup (or company) is more like a sports team than a family.

With a sports team there's a known win condition, different positions on the team, and support staff.

A family, with parents and kids growing up and just trying to live and enjoy life, has a very different dynamic.

author

I don't have a short answer, except a very personal one. Writing code after writing *a* test (not tests) gives me permission to solve one problem at a time. This relieves stress, increases confidence, generates energy, & promises quick satisfaction.

Jul 2 · Liked by Kent Beck

Ohh, that's a great answer. Thank you.

To keep with the hill climbing analogy.

You are climbing a bigger hill (by writing unit tests) so you can better enjoy skiing down it.

If it's not a snow covered hill then maybe you are tobogganing.

Either way it's still more enjoyable.


The analogy is very weak. It's like saying "sound reduces to vibration, and vibration loosens screws, so machines that make sound can't work."

Then everyone says "sure we're aware of vibrations in theory, but in practice the sound doesn't affect anything", and his point continues to be "but the vibrations..."


I noticed that when someone says something is unachievable with TDD, they are confusing two things.

One is how to write a solution in a programming language.

The other is getting an idea about how to solve a problem.

Last time, I got a challenge from a skeptic who didn't think TDD could solve the 8 queens problem. He not only asked for the code to be written from scratch in a TDD way, but also didn't want me to read the methods of solution from the wiki beforehand.

He wasn't comparing TDD with other programming methods, but rather comparing mathematics with programming.

The 8 queens problem may be simple enough to be figured out during the TDD process. But something just too hard would be like asking someone with no knowledge of astronomy to TDD a simulation of Mars' orbit.

It's not a limitation of TDD because the difficulty lies not in the programming itself.

Jul 1 · edited Jul 1

The OP's argument made me think of "irreducible complexity" arguments from evolution deniers. (With this in mind, it was an interesting twist that OP ended their post by mentioning that they were convinced not to go into religion.)


Posts ‘proving TDD cannot work’ make me think of proofs that bees cannot fly.

Some people don't like to work in a TDD fashion. I haven't figured out why, but I can live with it. I feel like it is their loss, because I love playing the TDD game. I wonder if there is a big overlap between those people and the people who don't value distinguishing behavior from structure?

Another question: there are some people intent upon killing my TDD buzz. I haven't figured out why.


TDD is like driving your car fast with a safety belt.


Thanks for this post, Kent! A few thoughts of my own.

Some of the hate and misunderstandings come from strawman arguments that get incorrectly inferred from the way TDD sometimes gets represented.

Some TDD advocates do indeed make it sound like you can use it to come up with algorithms in the first place, like Robert Martin showing how he'd use TDD to arrive at Quicksort.

And I think that's just not correct for any complex algorithms. I know YOU don't make that claim, but some people THINK that TDD claims that, if you just blindly follow the simple step of writing tests for your inputs and desired outputs, you'll fall ass backwards into correct and complicated algorithms.

That might work for simple localized/greedy/divide-and-conquer algorithms, but won't work for mathematically complex algorithms where the first step is to have an insight into the correct, and complicated, data structure to use. You'll have to think first and come up with the algorithm on pen and paper. Then, of course, you use TDD to make sure all your edge cases are covered etc.

In your post, you say that TDD does indeed do hill climbing, in that each step locks in another set of inputs for which you get the correct output: First, we get the correct total value of items in the shopping cart. Then we get the correct total value when a coupon is applied. Then we get the correct total value including sales tax etc etc.

This works if the set of inputs can be reasonably partitioned: Correct output for an empty cart. Check. Correct output for a cart with one item and no coupon. Check. Correct output when the amount qualifies us for free shipping. Check. And so on and so forth.
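To make that ratcheting concrete, here's a minimal sketch in Python (the names ShoppingCart, add_item, apply_coupon, set_tax_rate, and total are illustrative, not from the post), with each test locking in one more slice of behavior in roughly the order it might have been written:

    class ShoppingCart:
        def __init__(self):
            self._prices = []
            self._percent_off = 0
            self._tax_rate = 0.0

        def add_item(self, price):
            self._prices.append(price)

        def apply_coupon(self, percent_off):
            self._percent_off = percent_off

        def set_tax_rate(self, rate):
            self._tax_rate = rate

        def total(self):
            # coupon first, then tax on the discounted subtotal
            subtotal = sum(self._prices) * (1 - self._percent_off / 100)
            return round(subtotal * (1 + self._tax_rate), 2)

    # Ratchet 1: empty cart
    def test_empty_cart_totals_zero():
        assert ShoppingCart().total() == 0

    # Ratchet 2: one item, no coupon
    def test_single_item():
        cart = ShoppingCart()
        cart.add_item(10.00)
        assert cart.total() == 10.00

    # Ratchet 3: coupon applied
    def test_coupon_applied():
        cart = ShoppingCart()
        cart.add_item(10.00)
        cart.apply_coupon(10)
        assert cart.total() == 9.00

    # Ratchet 4: sales tax included
    def test_sales_tax_included():
        cart = ShoppingCart()
        cart.add_item(10.00)
        cart.set_tax_rate(0.05)
        assert cart.total() == 10.50

Each test passes before the next one is written, and the earlier tests keep the ground you've already gained from slipping.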

It doesn't work when there is a sort of discontinuity where your code goes from not working on any input at all to working on all inputs in a single global step with no reasonable steps in between.

An easy example for this happens in machine learning. If your task is to write a program that identifies dog breeds, you can't just start with a blank slate and write a test that attempts to lock in correct outputs for a subset of inputs ("Correctly identifies if something is a poodle. Correctly identifies if something is a wiener dog. Correctly identifies a labrador") and then ratchet your way up by adding additional tests. Instead, you need to train a neural network and let the learning loop of the network do the gradual, global, ratcheting up of correct outputs for given inputs. (I'd note that when wiring up your neural network and the training loop, you very much should follow the principles and guiding ideas behind TDD. Small steps. Verify often. Ratchet. Andrej Karpathy has a great post about that.)
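As a purely illustrative sketch of that parenthetical, assuming PyTorch is available: the "verify often" checks around a training loop look less like input/output tests and more like invariants, for example that the loss actually drops when you overfit one tiny fixed batch.

    import torch
    import torch.nn as nn

    def test_loss_decreases_on_one_tiny_batch():
        torch.manual_seed(0)
        model = nn.Linear(4, 1)                      # stand-in for a real network
        opt = torch.optim.SGD(model.parameters(), lr=0.01)
        loss_fn = nn.MSELoss()
        x, y = torch.randn(8, 4), torch.randn(8, 1)  # one tiny fixed batch

        initial = loss_fn(model(x), y).item()
        for _ in range(100):                         # small steps on the same batch
            opt.zero_grad()
            loss = loss_fn(model(x), y)
            loss.backward()
            opt.step()

        # Ratchet: training on a batch the model has already seen must reduce the loss.
        assert loss_fn(model(x), y).item() < initial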

Of course that is not a failure or fault of TDD, just a misunderstanding of where in the overall process of problem solving it should kick in. Think first, code second. And when you code, test first.


In my experience, TDD might not give you the optimal solution when a developer does not have enough experience. By optimal, I mean both computational complexity and time spent.

For example, if a developer chooses the functional paradigm and is not Chris Okasaki, he might not find any good solution. If, at some point, he decides to switch to an imperative model, the time effort metric will be screwed up—it is just a waste of time.

For instance, once I was using TDD on an XML to JSON converter. There were SAX and DOM parsers available. At first, I chose SAX. But after several months, I realized that some critical features required full-structure analysis somewhere in the middle, so the DOM approach was needed.

At that point, there was no good way to refactor the SAX-based code to DOM, so I had to keep both: first the SAX pass, which builds a home-made DOM alongside the remaining features, then the DOM processing, slowly shifting features from the SAX part to the DOM one. If I had known these caveats before my first tests, I would have built on DOM, not SAX.
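For the curious, a rough sketch of that intermediate state (the class and function names here are made up for illustration; this is not the actual converter): a SAX handler that accumulates a small home-made DOM, which the newer DOM-style features can then walk.

    import xml.sax

    class Node:
        def __init__(self, name, attrs):
            self.name = name
            self.attrs = dict(attrs)
            self.text = ""
            self.children = []

    class TreeBuilder(xml.sax.ContentHandler):
        # SAX callbacks that build a home-made DOM instead of streaming results out.
        def __init__(self):
            super().__init__()
            self.root = None
            self._stack = []

        def startElement(self, name, attrs):
            node = Node(name, attrs)
            if self._stack:
                self._stack[-1].children.append(node)
            else:
                self.root = node
            self._stack.append(node)

        def endElement(self, name):
            self._stack.pop()

        def characters(self, content):
            if self._stack:
                self._stack[-1].text += content

    def parse_to_tree(xml_text):
        handler = TreeBuilder()
        xml.sax.parseString(xml_text.encode("utf-8"), handler)
        return handler.root   # DOM-style features can now analyze the full structure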

In the end, I had a suboptimal solution, as the time budget was restricted.


Why the hate?

I agree with some of the comments here: someone forcing TDD on you, or seeing coworkers misuse it and make a big mess of flaky tests full of mocks. Neither of those is a valid argument against TDD itself; they are arguments against the person who forced it or who doesn't really know how to use it.

Every time I see a critique of TDD, it argues against points that Kent Beck never made, and it misses all the good things he does mention.


1) Suggestion:

Perhaps the OP didn't get the "outside in" approach that led to BDD. If you test at a very low level (internals, implementation details, etc.) it's much more difficult to refactor a program without breaking tests.
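A tiny, hypothetical illustration of that difference (Checkout, total_for, and _apply_discount are invented names, and the first test assumes the pytest-mock "mocker" fixture):

    class Checkout:
        def total_for(self, items):
            return self._apply_discount(10.00 * len(items))

        def _apply_discount(self, amount):
            return round(amount * 0.9, 2)

    # Low-level test: pinned to an internal helper, so renaming or inlining
    # _apply_discount breaks it even though observable behavior is unchanged.
    def test_checkout_calls_apply_discount(mocker):
        checkout = Checkout()
        spy = mocker.spy(checkout, "_apply_discount")
        checkout.total_for(["book"])
        spy.assert_called_once()

    # Outside-in test: asserts only what a caller can observe,
    # so internal refactoring leaves it green.
    def test_checkout_discounts_a_single_book():
        assert Checkout().total_for(["book"]) == 9.00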

Did the OP ever define how to TDD in the post? If not, they might want to back up a bit and start there.

2) End to Beginning Programming:

# test
def test_update_balance():
    bal = update_balance()
    assert bal is not None

# impl -- just enough to make the test pass
class Balance:
    pass

def update_balance():
    return Balance()

I just started at the end. Actually, this is a great way to start a program! I've found that software, like a maze, is often easier when you start with the desired outcome and work backwards. I think many engineers/teams get into trouble when they start a design at the beginning.

"ok, so the user comes here and creates an account. First thing we need is all the user account stuff."

--- 3 months later ---

"great! user account stuff is finally perfection...now let's start building the first part that makes this app valuable."

--- oops, ran out of funding/momentum/time ---


I think that their comments are correct, given the way that they have defined and restricted TDD.

But that's not the way that it's generally taught. And that's not the way that experienced practitioners use it.

TDD is clearly a "hill climbing" algorithm if you eliminate the Refactoring step. In fact, you'll get stuck at local maximums *Very Quickly*.

In the referenced post, Refactoring is limited to ONLY "eliminate duplication." With a lack of insight and inspiration, that can be a *Very Severe* limitation. Refactoring based on "Code Smells," in general, can and often does involve "structural changes." And maybe even "redesign." And all of this is typically "safe," in that your existing tests ensure that you preserve *ALL* of the desirable existing behavior.


Also, the poster *provides no alternative.*

One has to assume that the intended alternative to TDD is the standard solution: "Have an experienced architect survey the landscape, determine the global maximum, and give you the design for that."

That's an appealing fantasy. But I've never seen it work out in reality. No one has perfect knowledge -- of the problem, of proposed solutions, or of all the varied interactions between components, in complete detail.

*In practice*, however, with TDD, if you can envision a better way of doing things -- a non-local maximum -- you can get there. And you can get there *SAFELY*.

Yes, doing so may break some current "poorly written" tests. But we can "fix" those tests -- improving their design. If we're disciplined, we can fix the tests *before* doing the refactorings. But "after" often works anyway. We can be pragmatic.


Maybe the problem is that when TDD is explained, it is told as a logical sequence of steps because that's the easiest way to explain the idea.

Even the simple act of moving a method between two classes cannot be done without breaking all tests for that method (OK, if you're really clever you might pull it off, but it would be unnecessarily complicated). Does the OP think TDD forbids moving methods?


I haven't had any problems moving methods between classes without breaking any tests at all.

Moving a static method is trivial.

Moving a method to a class passed in a parameter [a common situation] can be done by making the original method forward to the new method, and then updating callers. Or one can update all the callers in the first refactoring step. Neither is really hard to do.
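A quick sketch of the forwarding variant (Account, Statement, and the method names are made up purely for illustration):

    class Statement:
        def __init__(self, lines):
            self.lines = lines

        def render(self):
            # new home of the behavior
            return "\n".join(self.lines)

    class Account:
        def render_statement(self, statement):
            # original method now just forwards; existing tests keep passing
            return statement.render()

    # Later steps: point callers at statement.render() directly,
    # then delete Account.render_statement once nothing uses it.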

Expand full comment