29 Comments
Apr 4, 2023Liked by Kent Beck

For synchronous/non-blocking, one possibility is partial pairing. I've often seen teams pair on what they consider to be significant changes but do solo work on what they consider small things.

"then you’d be notified in detail & immediately" Definitely curious about this :)

Apr 4, 2023·edited Apr 4, 2023Liked by Kent Beck

Think. Experiment. Share. -> most important part imo.

These arguments usually come from the desire to improve (certainly why I keep starting them). The part that a lot of people miss is that it is OK to try different things, and that it is also always OK to go back to the way you were (while keeping the J curve in mind: some techniques require practice before you can truly say they didn't work).

Certain techniques have a good likelihood of working. Small change sets are less risky, easier to debug, etc. That doesn't mean they are immediately applicable - sometimes other process improvements have to be made first to enable the true power of small, frequent changes. But if all you ever do is argue over the best way to do X or Y, nothing is ever going to improve. As Jeffrey Fredrick said, if you want to be better tomorrow, you need to be different from how you are today. The only way to do that is by experimenting.

It is imperative to establish a psychologically safe environment in which experimentation can occur. For instance, while I acknowledge the significance of pair programming in facilitating a thorough code review during changes, our organization currently lacks the opportunity to experiment with this practice. It is necessary to educate our client about the benefits of pair programming; however, they appear to be unreceptive to our suggestions at present.

One thing I try to convince people who can’t stop arguing to do is to look only at possible harms: instead of convincing each other of what’s best, if it doesn’t hurt, try it. Doesn’t always work, but sometimes it has broken the deadlock.

author

That's the reversibility argument. I use that too. Sometimes helps, sometimes it's more important for people not to change their minds.

Indeed, sometimes people just can’t get over themselves, can they?

I think there's a potential other dimension related to proximity. Working within a team on the same codebase is one thing; working across teams or groups can be quite another. The overhead of coordinating reviews and feedback on PRs, especially across teams and groups, can be considerable.

May 30, 2023Liked by Kent Beck

"I think there is fruitful space to explore in the bottom right." We are currently working on this topic and created a "Practices review" format (asynchronous, non-blocking, with the whole team at once). The idea is to discuss, once per week, various topics and best practices identified during code reviews (or directly while coding). This format is used by a lot of teams in France and the feedback is really encouraging. It helps teams have fewer comments in code reviews while sharing knowledge more efficiently. I discussed this format during a recent webinar (with a terrible English accent): https://youtu.be/cqxtE-DcvcA?t=2296

Jan 2Liked by Kent Beck

Would the TCR (test && commit || revert) workflow fit in Sync / Non-blocking?

author

It can’t be synchronous because you aren’t waiting for a review before you deploy (well, not with Limbo anyway).
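For readers who haven't seen it, TCR can be sketched as a small shell helper around git; the `tcr` function name is made up, and you pass it whatever command runs your tests:

```shell
# Hypothetical `tcr` helper: run the test command; commit the working tree
# if it passes, throw the change away if it fails (test && commit || revert).
tcr() {
  if "$@"; then
    git commit -qam "tcr: green"
  else
    git reset -q --hard HEAD    # revert: discard the failing change
  fi
}
```

Usage would be something like `tcr ./run-tests.sh` after each small edit, so the working tree only ever holds a change the tests have already blessed.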

Thanks for the post.

In our team we use "Ship / Show / Ask" strategy and it works pretty well
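For context, the three modes can be sketched as plain git flows; branch names, file names, and commit messages below are invented, and pushing/opening the PR is left as comments since that depends on the hosting setup:

```shell
# Sketch of the Ship / Show / Ask modes as git flows (all names hypothetical).
cd "$(mktemp -d)" && git init -qb main
git config user.email you@example.com && git config user.name "You"
echo v1 > app.txt && git add app.txt && git commit -qm init

# Ship: trivial, reversible change -- straight to mainline, no review gate.
echo "typo fixed" >> app.txt
git commit -qam "ship: fix typo"          # then push; nobody blocks

# Show: merge first, review after -- the PR is for visibility, not permission.
git checkout -qb show/rename-helper
echo "renamed" >> app.txt
git commit -qam "show: rename helper"     # push, open PR, merge at once, discuss async
git checkout -q main && git merge -q show/rename-helper

# Ask: risky or hard to reverse -- open the PR and wait for approval to merge.
git checkout -qb ask/change-auth-flow
echo "new auth" >> app.txt
git commit -qam "ask: change auth flow"   # push, open PR, block on review
```

In the post's terms, Ask is blocking, while Ship and Show are non-blocking with the review (if any) happening after integration.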

Regarding large pull request size and time to merge: this analysis of open source projects leads to a different conclusion. I guess things are different inside a company.

"Our analysis shows no relationship of pull request size and composition to the time-to-merge, regardless of how we partition the data: day of the week the pull request was created, affiliation to industry, and programming language."

source: https://arxiv.org/pdf/2203.05045.pdf

author

Interesting! I trust data more than my own opinions.

Inside Facebook back in the day there was definitely a correlation in the data.

Interesting, something I need to read. It seems to go against what Dragan Stepanovic discovered in his own research: https://www.youtube.com/watch?v=fYFruezJEDs

Not sure it’s going against; I need to watch the entire video 🙂 The paper focuses on open source projects, so within a company it’s a different context, unless you are going for an open-source-like collaboration mode: async with little trust.

Doesn't really surprise me because with open source people typically aren't getting paid to review PRs, so it's much easier to let even small PRs wait in general. Larger PRs, for a new feature let's say, are going to get a lot of discussion from core contributors prior to implementation, so there's less back and forth once the PR is created.

Did you start to implement the newsfeed-like review you describe at the end of the post? I'd be interested to hear more about that.

author

I didn’t. How I would proceed is hire 2 programmers with good writing skills and have them write a daily summary of activity on popular open source projects. Once they had experience with the kinds of summaries people find interesting, they would start writing tools to automate the process.
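A hypothetical starting point for that tooling, before any real summarization or writing: pull a day's activity out of git, grouped by author. The `summarize_day` helper name is made up:

```shell
# Raw material for a daily written summary of one repository:
# commits per author over the last day, then the subjects themselves.
summarize_day() {
  git log --since="1 day ago" --pretty="%an" | sort | uniq -c | sort -rn
  git log --since="1 day ago" --reverse --pretty="- %an: %s"
}
```

The interesting (and unautomated) part is turning that raw activity into the kind of summary people actually find interesting, which is why the sketch stops here.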

I would suggest puzzling in the AI direction, for example using GPT to summarize PRs. I think GitHub Copilot X for Pull Requests has something similar.

Very interesting article, especially the "dimensions of variability" part. I'm wondering how it connects with https://martinfowler.com/articles/ship-show-ask.html. Would it be "ask" = "blocking/async", "ship" = "nonblocking/async", and "show" = "nonblocking/sync"?

I'm surprised about cataloguing pairing as blocking. The "review process" of pairing happens as you are working, therefore, once the code is done you can push into production, nothing has been blocked, no additional step that needs to be done. I see that as one of the biggest advantages of pair programming. So I would catalogue that as Non-Blocking Sync.

I think the idea of blocking here is a change that needs a second pair of eyes because it's high risk or not easily reversible. If it's done while pairing then it's already had that second pair of eyes and doesn't need a separate review step - so blocking but unblocked synchronously through pairing.

The question mark could be an AI-based IDE tool that comments on and highlights problems but, unlike a human pair, can be more easily ignored.

"I think there is fruitful space to explore in the bottom right." Did you mean to say "the top right"?

author

No, I mean asynchronous, non-blocking review.

Very good observations. I wonder what you think about how DORA metrics and trunk-based development affect this train of thought.

https://cloud.google.com/architecture/devops/devops-tech-trunk-based-development

To break the Time/Delay/Size cycle, some places use a “stacked diff” system that lets PRs be submitted for review while subsequent work accumulates in a later PR. This is awkward with GitHub, but there are tools like this to help: https://cord.com/blog/stacked-diffs-on-github-with-spr/

author

Stacked diffs, if supported, help, but at the cost of potential coupling between the diffs. If a review of an early diff requires changes to later diffs, then that costs me "extra". Depends on how often that happens & how expensive when it does.
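Both the stacking and the coupling cost can be sketched with plain git, no spr tooling; branch and file names below are invented:

```shell
# Hypothetical stacked-diff flow: each reviewable slice gets its own branch,
# and later work stacks on top of earlier work while its PR waits for review.
cd "$(mktemp -d)" && git init -qb main
git config user.email you@example.com && git config user.name "You"
echo base > app.txt && git add app.txt && git commit -qm init

git checkout -qb part-1                   # first slice; open PR #1 from here
echo step1 > part1.txt && git add part1.txt && git commit -qm "part 1"

git checkout -qb part-2 part-1            # keep working on top while PR #1 waits
echo step2 > part2.txt && git add part2.txt && git commit -qm "part 2"

# The coupling cost: review changes to part-1 force the later branch to follow.
git checkout -q part-1
echo step1-reviewed > part1.txt && git commit -qam "part 1: review fixes"
git checkout -q part-2 && git rebase -q part-1
```

The final rebase is the "extra" cost: cheap here because the slices touch different files, but it can mean conflict resolution whenever the diffs are coupled.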
