I am generally HIGHLY resistant to paying for content subscriptions. It’s not that I am cheap; it’s just that I want so much variety in content that paying for all of it can become extremely expensive. But, a week ago, I took out a paid annual subscription to Kent Beck’s Substack because I have so much regard for his thinking and writing over the years. This two-part post is worth the entire year’s subscription! Thank you, Kent!
Now, I guess I am going to have to ante up for Gergely’s paid subscription, too. He has developed some equally awesome content.
Changing perspective so that software development & delivery is viewed more as a series of exploratory attempts is a massive organizational culture shift, and likewise means a radical change in outlook for most managers. [To be clear, it's a perspective I totally agree with.] The word "engineering" has for so long been synonymous with cut-and-dried, precisely measurable things. I suspect that word in "software engineering" is an obstacle to changing the perspective. I don't have an alternative and would love to hear ideas, even if the alternatives are 'only' used to introduce and explain these issues to C-levels, managers, etc.
I've been along for this ride in software development ("engineering") since the late 80s, always with a strong interest in the very human side of things. (To the extent that I also majored in psychology.)
Humans and our systems (cultures, societies, communities) are what make software development complex. The interactions, the communications, and -- as these two posts show -- the very difficult realm of performance and rewards (which implies some sort of measurement) are, in my not-so-humble opinion, vastly more complex and difficult than any technical endeavor.
"You get what you measure" is such a fundamental mantra in science -- and especially psychology. But C-level folks and managers are most often measured on short-term metrics (e.g. quarterly stock value or profits) and so that is what they are incentivized to maximize. (Not to mention many don't know of or understand this fundamental concept.)
In reading this, I am struck by other areas where measuring performance is also difficult. For example: measuring the performance of teachers in US public schools. (Teacher performance is affected by a large number of forces beyond their control, namely the capabilities of the individual students who happen to land in their classroom in any given year, the number of students they must teach, etc.) Or measuring the performance of individual health care workers (e.g., nurses in a hospital ward).
Extreme examples abound of gaming the system because of ill-thought-out rewards. For example, the US bank Wells Fargo rewarded employees for opening new accounts, so some employees figured out how to open accounts for real people without their knowledge or consent. (They were ultimately discovered and taken to court.) In Georgia, teachers were rewarded based on standardized test scores, so some teachers altered student answers in order to improve those scores. The scheme was uncovered and many were ultimately taken to court. https://en.wikipedia.org/wiki/Atlanta_Public_Schools_cheating_scandal
Those examples show just how much pressure there is to come up with metrics, and how often that pressure leads to really awful metrics, and thus awful incentives and awful results.
- - -
[As a slight aside, I am curious whether there are domains other than sports where there are metrics for individuals, for individual contribution to team performance, _and_ for overall team performance. Perhaps no other examples would communicate the ideas so clearly. Perhaps no other examples are so universally understood.]
It seems like the failures of the “Agile Industrial Complex” are worth a mention when we discuss measurement in today’s world.
Though this is a great article series, it would read better without phrases such as "My test-driven development book...." because they can come off as "I'm writing this article to sell my book" rather than as a sincere point about the issue; in other words, as an ad. Someone without this background (maybe a C-level exec?) who has also read the McKinsey statement (which McKinsey probably made to sell its services) could conclude: "They just want to sell their stuff and make money, but don't actually care about helping improve our company's outcomes and impact," just like the salesperson gaming the system in the article. Kent has contributed immensely to the improvement of software engineering, and this debate could improve it further, but adding anything that hints it is being done for personal gain, even if only in part, could damage the whole argument.
Seems to me individual “performance” should be measured on 2 axes: 1) team performance; 2) individual growth. Ideally, these should be “managed” by different people.
1) Everyone belongs to a team. Day-to-day, the team members hold each other accountable for performance, i.e., achieving outcomes or impact. Optimally, one can draw a line from team outcomes to organizational priorities. This measurement is fairly objective and is useful for lessening “gaming” (it is less likely that all team members would be in on it).
Teams are formed based on the requirements of the mission, which will vary from low uncertainty (requiring less differentiated skills) to high uncertainty or complexity (requiring interdisciplinary or cross-functional skills).
2) Career advancement, mentoring, and skills development are managed separately by someone who is not responsible for the team's outcomes. This gives the individual the opportunity to be the best they can be.
I don't know all the details of how recruitment and sales departments normally work, but I am sure their work and decisions have an impact on the engineering team, and that impact is probably not being measured. I remember how, in one of the companies I worked for (sadly, one with a quite competitive and toxic culture), the sales department continuously compromised the engineering department by promising clients things that were far more expensive to achieve than what was being charged, without asking for a second opinion, just to close the sale and put a star on their achievement list. Similarly, it is probably not being measured how a recruiter's choices affect the performance of a team (I also don't know whether their work finishes when someone signs the contract; if so, good luck to both the new hire and the team she will join).
Having gone from software engineer to team coach to product manager in my career, I find the tensions here hit home. I'm thankful every day that I work in a small organization that "gets it", including the CEO.
Rewinding to the start of part one: the update meeting where the sales representative has all the figures and the engineering person stumbles and technobabbles. To me, this is where product and engineering need to have a pact. They must steer the conversation to be about the bets placed, along with the outcomes, impacts, and end-user value delivered so far. There's a responsibility there to make the conversation about what matters; otherwise the group will decide what matters for you, and pull in the frameworks to measure it.
TL;DR:
Be suspicious of anyone claiming to measure developer productivity.
I am 100% pro-accountability. Weekly delivery of customer-appreciated value is the best accountability, the most aligned, the least distorting.