16 Comments
May 17 · Liked by Kent Beck

In the world of web-deployed, low-criticality software, faster is better for all of the reasons you mention. Another thing shorter deployments get you is more use cases. No matter how hard you try, some user somewhere is using your software in a way you would never think of, and so never check for. Shorter cycles teach you about those things and let you course-correct.

But when you're dealing with things that aren't updatable (such as embedded systems) or safety-critical systems that involve physical things moving in the real world, the cost of failure is higher, so deploying (shipping) to the public fast isn't possible or desirable.

That said, even in those cases, deploying to internal test systems should be as fast and simple as possible so you can do those real-world tests quickly.

Good reminder for me. Thanks. I'm about to transition from the web/mobile world into the hardware world in my new role.

The acceleration benefit I value the most is reduction of risk the way it's described in "Accelerate." I think some of that is wrapped up in your Scaling section. For pitching to the skeptical (and the suits), it might be worth describing it separately.

My random thoughts: "Accelerating to the point where deployments can only be automated is a forcing function to de-risk every deployment by adopting more resilient patterns. Failure rate is always non-zero. If you don't deploy enough to know what YOURS is and how to gracefully recover, your next deployment could be The One that breaks you bad."
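
A back-of-the-envelope illustration of that non-zero failure rate; the per-deploy rates below are assumptions, not measurements:

```python
# Illustrative only: the per-deploy failure rates here are assumptions.
def p_at_least_one_failure(per_deploy_rate: float, deploys: int) -> float:
    """Chance of at least one failed deployment across `deploys` attempts."""
    return 1 - (1 - per_deploy_rate) ** deploys

for rate in (0.01, 0.05):
    p = p_at_least_one_failure(rate, 100)
    print(f"{rate:.0%} per deploy -> {p:.0%} chance of a failure in 100 deploys")
```

Even a 1% rate makes a failure near-certain over a hundred deploys, so practiced recovery beats hoped-for perfection.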

May 19 · Liked by Kent Beck

Another benefit of smaller cycle times: psychological safety. No need to have nightmares about that pending deployment given how that one in the past failed so horribly.

May 20 · Liked by Kent Beck

One thing I always want to add to the more-frequent-deployment conversation is that unconstrained deployment is a TERRIBLE customer experience, especially for SaaS products/customers. More often than is comfortable (and I've done this more often than I'd care to admit as a software professional), I've seen engineering teams move to rapid/continuous deployment without considering the customer's actual experience and without putting systems in place to manage that experience. Those systems include software systems (emails, tutorials, help bubbles, experiment systems/runtime configuration, etc.) and institutional systems (communication with support and CSMs, sales/marketing, etc.).

Imagine you're a customer who uses some tool every day. It is a critical part of your job. Now you've got a tight deadline and a hot deliverable, and you go into the tool you use every single day, ready to knock out this deliverable. Only ... the screen is slightly different. There is a new feature, new buttons, the interface is different. And there is nothing on the screen telling you what is different. You didn't have the option to opt into this new functionality. Now you're a panicked customer, because your workflow has been upset with no communication. You call your CSM or the support number, and they don't know what happened either. Everyone is confused, and that's a really bad experience. And that is the least intrusive case; it gets progressively worse as the changes become more fundamental.
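
One minimal sketch of the "experiment systems/runtime configuration" idea, assuming a hypothetical opt-in flag rather than any real system: the redesign is deployed continuously, but each customer chooses when to see it.

```python
from dataclasses import dataclass, field

# Hypothetical sketch: none of these names come from a real flag system.
# The new dashboard ships with every deploy, but stays behind an opt-in.

@dataclass
class User:
    id: str
    opted_in: set = field(default_factory=set)

ANNOUNCEMENTS = {
    "new_dashboard": "Try the redesigned dashboard (you can switch back anytime).",
}

def render_dashboard(user: User) -> str:
    if "new_dashboard" in user.opted_in:
        return "new dashboard"  # new code path, already deployed
    # Old path keeps working; surface a non-blocking announcement instead.
    print(ANNOUNCEMENTS["new_dashboard"])
    return "classic dashboard"

user = User(id="u1")
assert render_dashboard(user) == "classic dashboard"
user.opted_in.add("new_dashboard")
assert render_dashboard(user) == "new dashboard"
```

The deploy cadence stays fast; the customer's experience changes only on the customer's schedule.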

May 22 · Liked by Kent Beck

Communication is not enough - people don't read. Even if you entirely remove a labor-intensive step, someone somewhere will be confused and call support, or complain that you are breaking their automation (insert XKCD).

I've received support requests with screenshots showing a big red banner explaining what was happening and what the course of action should be. The user was fuming that they had spent over an hour trying to work around it, but they didn't read the banner because they assumed it was spam.

And then of course, nobody reads error messages or confirmation boxes.

Kent Beck (author)

There's a myth that the appropriate level of user discomfort is zero. Anything that moves the needle off zero must therefore be wrong. I see it as a tradeoff. We are responsible for minimizing the pain. We are responsible for making sure the pain comes with sufficient payoff. And then, yes, we are responsible for keeping CX, sales, marketing, finance & everyone else affected apprised of what is happening. Not easy, but boo hoo.

been that customer too many times to count.

Kent Beck (author)

I've seen this scenario play out too. When incentives are misaligned, you get misaligned behavior. It *is* possible to continuously deploy features well, though.

I 100% agree, it is possible. But it has to be intentional, and it has to be planned and communicated with the rest of the organization. (And in my experience, usually it is not.)

This is an example of a product team prioritizing their own needs over the customer's, and in larger companies the effects on customer experience could be disastrous. This is why I am a firm believer in double loop learning and continual improvement, but a lot of senior management and execs seem to think that their job is just to pull levers until profit goes up.

It is alluded to in a couple of the reasons you mention, but to put it plainly: increasing deployment speed is also a forcing function for everything else happening upstream and downstream. CI must run faster now; you no longer have time for manual QA, so automation has to be introduced where there was none before; and so on. It forces everyone to up their game.

Counterintuitively, increasing deployment speed tends to lead to quality improvements all around.
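
As a sketch of that forcing function (the command and time budget are assumptions, not a standard tool): imagine a deploy gate where a slow test suite fails the gate just like a red one, so pipeline speed becomes a defect you have to fix.

```python
import subprocess
import sys
import time

# Hypothetical deploy gate; pytest and the five-minute budget are
# illustrative choices, not a prescribed setup.
TEST_COMMAND = ["pytest", "-q"]
TIME_BUDGET_SECONDS = 300

def gate() -> int:
    start = time.monotonic()
    result = subprocess.run(TEST_COMMAND)
    elapsed = time.monotonic() - start
    if result.returncode != 0:
        print("Gate failed: tests are red; deployment blocked.")
        return 1
    if elapsed > TIME_BUDGET_SECONDS:
        print(f"Gate failed: suite took {elapsed:.0f}s, budget is {TIME_BUDGET_SECONDS}s.")
        return 1
    print(f"Gate passed in {elapsed:.0f}s; safe to deploy.")
    return 0

if __name__ == "__main__":
    sys.exit(gate())
```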

Excellent insights as always. With regards to competition and the OODA reference, in my view it is not really about how fast you can push new features out. After all, there is a limit to how much change customers can absorb in a given period of time or how quickly your organization can gather feedback and learn. Rather, it is much more about how quickly your organization can test and validate ideas so that the features you do release work really well for your customers. Doing that inside the management OODA loop of the competition is what makes you win.

Another potential challenge with the OODA argument for reducing deployment cycles is that people don't always understand the difference between deployment and release, and generally focus on the latter because it is more visible and generally tracks the learning cadence of the organization (product and process), particularly if they use time-boxed frameworks like Scrum rather than flow-based development.
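
To make that distinction concrete, here's a minimal sketch assuming a hypothetical percentage-rollout flag (not any particular library): the code is deployed dark, and "release" is just raising the percentage.

```python
import hashlib

# Hypothetical sketch: the new code path is DEPLOYED to production
# immediately, but only RELEASED to the fraction of users covered
# by the rollout percentage.
ROLLOUT_PERCENT = {"faster_search": 0}  # deployed dark: released to 0%

def is_released(feature: str, user_id: str) -> bool:
    percent = ROLLOUT_PERCENT.get(feature, 0)
    # Stable hash so a given user gets a consistent experience.
    digest = hashlib.sha256(f"{feature}:{user_id}".encode()).hexdigest()
    return int(digest, 16) % 100 < percent

# Deployment cadence is now independent of release cadence:
# ship as often as you like, then "release" by raising the percentage.
ROLLOUT_PERCENT["faster_search"] = 10  # release to ~10% of users
print(is_released("faster_search", "user-42"))
```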

Finally, there may be little value, or at least quickly diminishing returns, in increasing deployment frequency beyond daily and into the domain of hours; daily is generally sufficient to avoid merge hell and the associated waste and burnout.

Just some thoughts. I hope you find some of them useful.

A couple additional thoughts:

1. Faster deploy means faster learning. Our team deploys small tests constantly to validate hypotheses. I think of this as compounding interest: tightening deploys shortens the time to accrue interest, which increases the compounding effect (see the sketch after this list).

2. The capacity for faster deploy changes the thinking around feature launch strategies. Instead of taking high-conviction big swings in a single go, you can space out the pieces to test the highest-risk parts first. This keeps your options higher for longer (optionality!). For example, if design churns heavily, you can deploy the pieces that are less likely to churn first and keep the high-churn one-way doors open. I'm writing about this right now, actually! Your book's chapter on optionality completely changed my brain!

3. Feature launches are constrained by the slowest deploy. For example, web can launch quickly, but cross-functional features with mobile need to launch on the mobile deploy timeline, which is slower due to App Store review. Tighter deploys mean faster overall responses to churn, bugs, feedback, etc. across platforms.
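
Here's the compounding-interest sketch from point 1, with invented numbers: treat each small deploy as an experiment worth about a 1% improvement on average.

```python
# Toy model of deploys as compounding interest; all numbers are invented.
def compounded_gain(lift_per_deploy: float, deploys: int) -> float:
    return (1 + lift_per_deploy) ** deploys - 1

quarterly = compounded_gain(0.01, 1)   # one experiment per quarter
weekly = compounded_gain(0.01, 13)     # one per week, same quarter
print(f"quarterly cadence: {quarterly:.1%} gain")  # ~1.0%
print(f"weekly cadence:    {weekly:.1%} gain")     # ~13.8%
```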

A benefit I have come to greatly value is renewing stakeholder trust.

At the outset, stakeholders take a leap of faith: engineering will go off and do their thing, and hopefully come back with valuable software.

Over time the faith decays. Maybe we should cancel the project and ask them to work on something else, or terminate the contract? Maybe someone else can meet our needs more quickly?

Each time engineering releases a valuable increment, stakeholders see that their faith was well founded and become willing to take that leap once again.

Immediate thought: user/customer satisfaction. Maybe it's subsumed by "competition," but hear me out.

Let's say your bank deployed some update. And that update made some customers unhappy, specifically about a couple of missing or poorly designed features. Let's look at what happens with different cycle times.

(btw, I also think about "cost of change" in addition to cycle time)

Annual - users have to wait a very long time, if not forever, to see the change.

Quarterly - users have to wait months or years, or maybe forever, to see the change.

Monthly - users have to wait maybe a month or two to see the change.

Weekly - might be getting into acceptable territory, if the change ever happens.

< weekly - users are much more likely to actually see the change.

Why does cost of change matter? A user gives feedback through some sort of intake, and where does that go? A support queue? That's stop 1. From there it might go to some product function's queue (stop 2). Then, if you're lucky, it goes into a planning queue (stop 3), where it'll get some people talking about it in a committee or whatever. Then some design and planning for implementation. Then some work to make it happen, plan the release, test it, do code reviews, security reviews, legal reviews, revisions, etc. How many people touched that piece of feedback between receiving it and shipping the improvement?
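
A rough way to see why those stops dominate; every wait time below is invented for illustration:

```python
# Hypothetical cost-of-change model; every number here is made up.
# The feedback sits in a queue at each "stop" before it ever ships.
QUEUE_WAITS_DAYS = {
    "support triage": 2,
    "product backlog": 30,
    "planning committee": 14,
    "implementation + reviews": 10,
}

def days_until_users_see_the_fix(deploy_cycle_days: float) -> float:
    # On average, a finished change waits half a cycle for the next deploy.
    return sum(QUEUE_WAITS_DAYS.values()) + deploy_cycle_days / 2

for cycle in (365, 90, 30, 7, 1):
    days = days_until_users_see_the_fix(cycle)
    print(f"{cycle:>3}-day deploy cycle -> ~{days:.0f} days until users see it")
```

Once the cycle is short, the queues, not the deploy itself, set the cost of change.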

What if the change is seemingly simple? Does it lower the cost?

How about a change that's simple to describe and to put in the "todo" queue, but difficult to implement? How does the cost change?

How can you lower cost of change in different scenarios?
