You'd have to combine it with some risk measure of the code: an automated scan that flags high-risk changes. Potentially infinite recursion, for example, would get flagged. Any changes that are boilerplate get automatically hidden from the feed. Formatting-only changes? Hide 'em. Someone committed a 2000-line file that was not autogenerated? Push it into the feed. And so forth.
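That kind of scan could be sketched with a few crude heuristics. Everything below is invented for illustration, and a real risk scanner would need proper parsing rather than string matching:

```python
import re

def flags_infinite_recursion(src):
    """Crude red flag: a function that calls itself but contains no
    conditional that could serve as a base case."""
    for m in re.finditer(r"def\s+(\w+)\s*\(", src):
        name, body = m.group(1), src[m.end():]
        nxt = body.find("\ndef ")
        body = body if nxt == -1 else body[:nxt]
        if name + "(" in body and "if " not in body and "while " not in body:
            return True
    return False

def is_formatting_only(old_lines, new_lines):
    """Same content modulo whitespace -> safe to hide from the feed."""
    squash = lambda ls: [" ".join(l.split()) for l in ls if l.strip()]
    return squash(old_lines) == squash(new_lines)

def classify_change(old_lines, new_lines, is_generated):
    """Return 'feed' (surface for review) or 'hide' for one changed file."""
    if is_formatting_only(old_lines, new_lines):
        return "hide"
    if not is_generated and len(new_lines) > 2000:
        return "feed"  # huge hand-written file: push into the feed
    if flags_infinite_recursion("\n".join(new_lines)):
        return "feed"  # possible infinite recursion
    return "hide"  # boilerplate / low-risk by default
```

The default here is "hide", matching the idea that only risky changes should reach the feed; a real tool would likely want more flags and a safer default.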
I'm confused (again). In my personal experience (and hence what I advocate to others), if I'm pairing and switching pairs frequently (or mobbing), it renders code reviews irrelevant, because they are being done continually. Is there a case where you would still want to do "non-blocking asynchronous code reviews"?
When I hear/read "If we don't review code before it's integrated, we'll never review it," I have to wonder whether they really see any value in the code review process. If the only way to make them do the process is to make it a blocking operation, then it sounds to me like those doing and managing the work lack the motivation and cost justification to do it for its alleged value alone. This strikes me as "crisis-driven management": getting things done by making them a crisis, not by demonstrating their value.
It keeps some of the value of code reviews, but not all of it, at least in my understanding of code reviews.
IMO, code reviews should be synchronous: two devs sitting together discussing a piece of code. If, however, code review is only ever asynchronous, CodeFeed could indeed fully replace it.
The important aspects I see missing in asynchronous code reviews are the discussion, the learning, the understanding of a different approach.
If possible, I’d always opt for pair or mob programming. Besides other benefits it also has a built-in code review. Am I too naive? Would that miss other benefits of code reviews?
I love the idea and I see how this could improve workflows. I need to be very careful not to think about it too much, though, as I would put away all my current work and start playing with it. 😂
An incremental step might be to reframe what a code review is. Some go on mythic quests for "code quality," which really means "structure it the way I would structure it." To make it less of a battleground, the review could be reframed as asking:
1. What is the intention of the change?
2. Does the code match the intention of the change?
3. Are there any bugs or unintended consequences of the change?
On a case-by-case basis, a code review can go deeper into providing feedback, but the effort of the reviewer should match the effort of the person who put the change-set up. A specific example is when the reviewer is in a mentorship position with that individual.
Matching effort must also factor in timeliness of code reviews. If it takes a week for you to review a change-set, and you have blocking feedback, then that should be seen as a failure of your responsibility to provide timely reviews.
On the other hand, if someone gives you a large change-set, then it should be understood that it will take you longer to review: again, the principle of matching effort. Change-set size should be measured by the complexity of the change, and coupling becomes an important factor in this.
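As a sketch of "size measured by complexity and coupling" rather than by raw line count, a change-set could be weighted something like this (the scoring function, keyword list, and weights are all invented for illustration):

```python
def review_effort_score(diff):
    """diff maps file path -> list of added lines.
    Score grows with the branching complexity of the added code and
    with coupling, approximated here as the number of distinct
    top-level modules (first path components) the change touches."""
    KEYWORDS = ("if ", "for ", "while ", "except ", "and ", "or ")
    complexity = sum(
        1 + sum(line.count(k) for k in KEYWORDS)
        for lines in diff.values()
        for line in lines
    )
    modules = {path.split("/")[0] for path in diff}
    coupling = len(modules)
    # Each extra module touched inflates the score by 50% (arbitrary weight).
    return complexity * (1 + 0.5 * (coupling - 1))
```

A change spread across many modules thus demands more review effort than the same number of lines confined to one place, which matches the intuition in the comment above.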
I don't think that pairing is a full replacement for code reviews (though it has other benefits). Even code reviews with the other dev sitting next to you are worse, in my experience, because you don't give yourself the same amount of time, patience, and concentration/focus that you would apply while reviewing alone. The only question is whether the better review is worth the drawbacks.
Also, information like "the structural change you made is spreading to other systems" might be hard to detect even for other developers, especially external ones who are not part of the team (as you proposed), let alone for an AI. Maybe your change even violates some implicit intent that you yourself weren't aware of, or didn't know others were unaware of. If people aren't able to provide such feedback manually, such a system looks too imagined to me... Not saying that I wouldn't like to use it if it existed ;) Especially as it allows following changes without participating in each review.
I'm wondering if it would be a more realistic approach to separate the main/trunk from the notion of releasable/done.
If everyone could integrate into mainline without a review (either directly or after automated CI checks), but the system still kept track of which commits have been reviewed, you should be able to remove the part that blocks development.
You could then bundle up commits from mainline into a review (or other devs would pick them themselves) and have a classical review of them (the other dev taking their time, etc.). Only reviewed commits can be released, and the underlying issues would only count as done when all commits referencing them have been reviewed, reflecting the actual progress. This would keep the benefits of reviews and would allow existing processes and compliance-foo to stay as they are, while unblocking developers from waiting for code to arrive at mainline and removing the incentive to create large branches. Additionally, tools for such a workflow exist (JetBrains, at least, has one where you can create reviews for arbitrary commits on the mainline).
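The bookkeeping such a workflow needs is small. A minimal sketch, assuming the reviewed state lives somewhere out-of-band (in practice the review tool or something like git notes, not an in-memory set):

```python
def releasable_head(mainline, reviewed):
    """mainline: commit shas ordered oldest -> newest.
    reviewed: set of shas that have passed review.
    Return the newest commit whose entire ancestry on mainline is
    reviewed (i.e. the latest releasable point), or None if even the
    oldest commit is unreviewed."""
    head = None
    for sha in mainline:
        if sha not in reviewed:
            break  # an unreviewed commit blocks everything after it
        head = sha
    return head

def issue_done(issue_commits, reviewed):
    """An issue only counts as done when every commit referencing it
    has been reviewed."""
    return set(issue_commits) <= reviewed
```

So mainline integration is never blocked, yet the release cut and the issue status still honor the review requirement.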
I'm wondering which actual drawbacks would remain.
The more I read this, the more confused I am! Non-blocking code review 🤔 Can someone boil this down to a few words and say what we are trying to achieve by this? Saving time?
A good repository with good rules about commit messages would go a long way here, as would a programmer who understands the VCS very well and knows how to summarize changes.
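Such rules can even be enforced mechanically. A minimal sketch of a summary-line check; the `<area>: <summary>` format is an illustrative assumption, not a standard:

```python
import re

# Illustrative convention: "<area>: <imperative summary, at most 72 chars>"
SUMMARY_RE = re.compile(r"^[a-z0-9_-]+: .{1,72}$")

def check_commit_summary(message):
    """Return True if the first line of the commit message matches
    the (assumed) '<area>: <summary>' convention."""
    first_line = message.splitlines()[0] if message else ""
    return bool(SUMMARY_RE.match(first_line))
```

A check like this could be wired into git's commit-msg hook, which receives the path of the message file as its first argument, so badly summarized changes never reach the shared history in the first place.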
As for us, that would not work at all. My team appreciates code reviews because they provide knowledge sharing and allow catching many misconceptions at a very early stage. Still, the motivation to review merge requests tends to be low; begging like "guys, let's merge it, those changes are dependencies for XYZ" is often required.
Loosening that necessity, that time constraint and therefore that pressure, would reduce code reviews to a minimum or even to zero.
How do you solve the organizational incentives needed to implement non-blocking code review? Once the code is merged, engineers can either pay off tech debt or launch higher-profile items, and the prioritization is handled by the PMs, who themselves appreciate the tech debt less.