I have been thinking about this for a while.
While I personally view the “Are developers being productive?” question as a smell of other problems within an organization (lack of trust, poor alignment, poor transparency), it is not unreasonable to want to know about the efficiency of the production line (as opposed to individuals, whose output I don't think can be measured outside the context of a team when talking about software). For that, I feel the DORA metrics do a good job of demonstrating a team's ability to deliver "stuff" that is "stable".
A problem I see with the models I have been involved with is that they don't answer the question about the value of the "stuff". Doing what you are told quickly and efficiently (because, let's be realistic, nobody outside of a team REALLY cares about technical debt; they just want it faster) is great, but it won't answer the CFO's real question: are we getting value for the investment? We had fewer defects this quarter! So? Did profits go up? In the complex systems that most organizations are, finding the correlations between effort and outcome can be difficult, which is why we fall back on simpler ideas. Metrics like Cycle Time are indicators of problems in the line, but they do not tell you whether the machine is making the right widgets. To understand the value, we need to measure things like revenue, costs, customer retention/satisfaction, and so on.
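For concreteness: the throughput/stability side of this (the DORA-style numbers) really can be derived from nothing more than deployment records. This is a minimal sketch with invented data, just to show how little the metrics themselves tell you about value:

```python
from datetime import datetime, timedelta

# Hypothetical deployment records: (commit_time, deploy_time, caused_incident)
deploys = [
    (datetime(2024, 5, 1, 9), datetime(2024, 5, 2, 14), False),
    (datetime(2024, 5, 3, 11), datetime(2024, 5, 3, 16), True),
    (datetime(2024, 5, 6, 10), datetime(2024, 5, 7, 9), False),
]

# Deployment frequency: deploys per week over the observed window
window_days = (deploys[-1][1] - deploys[0][1]).days or 1
deploys_per_week = len(deploys) * 7 / window_days

# Lead time for changes: mean commit-to-deploy delay
lead_times = [deploy - commit for commit, deploy, _ in deploys]
mean_lead_time = sum(lead_times, timedelta()) / len(lead_times)

# Change failure rate: share of deploys that triggered an incident
failure_rate = sum(1 for *_, failed in deploys if failed) / len(deploys)

print(deploys_per_week, mean_lead_time, failure_rate)
```

Note that nothing in those three numbers says whether any of the deploys moved revenue, retention, or any other business outcome; that data lives in entirely different systems, which is exactly the correlation problem above.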
Simply align development to actual business objectives and voila!
"We're losing market share in the 20-28 age bracket, WDYD?"
Let's start solving these problems rather than problems like "how do we ship faster?" or "what architectural pattern gives us one more option to change later?"
Thank you for sharing your thoughts. Do you think we will have fewer discussions about engineering performance if we continuously deliver value to the customers?
That's part of it. The thing about doing a better job, though, is that people will come to expect it. That's not necessarily bad, just don't expect to ever be "good enough". Yesterday's "good" is today's "opportunity for improvement".
Amen! Take, for example, the typical annual performance evaluation at many companies. It's rarely acceptable to rate a developer as Excellent/Exceeds-Expectations/9-or-10-out-of-10 (or whatever value represents the highest rating) across the board, no matter how great he or she is. Even less so to do it in consecutive years.
One of my most successful students (now at the tippy top at Meta) consistently went from exceeds/promo to meets most to exceeds/promo on alternating reviews. For years.
If the promotion comes with the excellent review, I think that pattern makes sense and represents growth. I just don't know how many of us that condition applies to. In 25+ years I've not worked at a place where promotion was an automatic result of an excellent review, and where promotion of developers also allowed us to stay away from the dreaded Management Career Path. Maybe my experience is unique.
So it seems that "the best" executives are "the best" because they already understand:
- we don't know much about how to predict even gross profit margin from the flow of features (_Waltzing with Bears_)
- the flow of features (and therefore gross profit margin) doesn't scale linearly with staffing levels (_Peopleware_, _Psychology of Computer Programming_, ... take your pick)
- the flow of features is likely _at best_ artificially inflated and unsustainable over the long term, because incentives tend to push it in that direction, and Winter is Coming, but we can't forecast when (_Deadline_, _Slack_, ...)
I'm certainly naive, because I don't get to talk to executives much, but those things seem pretty settled and evident to me. Clearly, I was influenced by DeMarco and Lister early in my career, but I presume that other people learned similar lessons from other sources.
This seems to lead me in the direction of not bothering at all to measure productivity, which makes the issue for me less of a tradeoff (sometimes it's worth it and sometimes not) and more of a default anti-recommendation. Assume not only that you don't need it, but also that it would be more trouble than it's worth if you tried. In that way, it reminds me of the #NoEstimates debates of the last decade. If you're trying to measure productivity, then you've probably lost sight of the goal. Don't merely look where the light shines brightest.
It's almost as though we would benefit from tighter integration and greater collaboration between the people asking for features and the people delivering them....
But srsly, one of the toughest retorts aimed at the #NoEstimates crowd wasn't the cry of "You're shirking your responsibilities by not estimating cost!" but rather "We can estimate cost pretty well, so what's your excuse?!" I find it tough because they're heavily incentivized to lie about their accuracy, or worse, to hold an unjustified belief in their accuracy. I imagine it's the same when trying to evaluate the ROI of any non-trivially-sized Engineering group that operates as an independent unit within an enterprise.
I think I need a nap.
I've been reading a lot on this topic lately and agree with many of the points shared in this forum and on the Pragmatic Engineer, but what I'm still wrestling with is where to start and how to show improvement. I get Goodhart's Law and not wanting things to be gamed; we've seen it at my company. I want things to improve and people to be able to relish and see the improvements. I get that there are quantitative and qualitative aspects, but in the end, how do you best show that things are improving? It feels great to remove roadblocks that are impacting a team's flow, but how do you measure the impact and show it all the way up to top executives? I can show all the improvements we've made over time, but it's much harder to quantify the impact they've had, if that makes sense.
Would love to meet with some people passionate about this topic. Reach out if you're interested in forming some sort of cohort. kulikev@chrobinson.com.
When I was forced to fill out some forms about performance and so on, I asked myself: am I doing this because I want to improve my engineering skills, or because I'm trying to show someone that I deserve this position?
In other words, am I doing my best, or am I only trying because I'm being watched by my peers? This sounds odd to me, but it's what happens in companies.
In my humble opinion, being motivated and productive in a work environment is related to:
1) The team: how free they are to push creative things to production
2) The leader: some leaders, under pressure or otherwise, avoid giving too much freedom to the engineers and keep them focused on pushing tasks for the next phase. In the end, even the lazy folks like to be involved in the creative process, not just fix bugs
I'm simplifying a lot here, but I'm not trying to push some golden rule, just sharing some personal thoughts.
How doth the shiny crocodile improve her shiny scales? (question for robo Ken)