My Fitbit Buzzed and I Understood Enshittification
You can't measure your way to delight
My Fitbit started buzzing at me a year ago. “It looks like you’re exercising.”
Yeah. No shit. I’m walking. I know I’m exercising. I’m the one doing it.
I didn’t ask for this notification. I don’t want this notification. Nobody wants to be told what they’re already doing. And yet, here we are.
I was annoyed for about thirty seconds. Then I started thinking about what it must be like to be a product developer inside Fitbit. That’s the advantage of walking as exercise. Time to think.
The View From Inside
You’re a product owner. You have a feature to ship: “Automatic Exercise Detection.” It’s a reasonable feature. The watch notices when you start moving in exercise-like ways and begins tracking.
But here’s your problem: how do you know the feature is working? How do you prove it’s valuable? How do you keep your job?
You need metrics. You need numbers that go up.
So you add a notification. “It looks like you’re exercising.” Now you can measure engagement. Users are responding to your feature. They’re seeing it. They’re interacting with it. Your numbers go up. Your feature is a success. You get to stay employed.
Then users get annoyed. Some of them complain. So you add a setting to turn it off. But you default it to “on” because that keeps your numbers up. Most users won’t find the setting. Most users will just... tolerate it.
I can’t blame this product owner. They’re playing the only game available to them. The company set up incentives that reward exactly this behavior. What else were they supposed to do?
This Is The Mechanism
I’ve been thinking about this pattern ever since Cory Doctorow coined “enshittification” to describe how platforms decay. But I don’t think we’ve been precise enough about the mechanism.
It’s not that companies decide to make their products worse. Nobody wakes up thinking, “Let’s annoy our users today.” The mechanism is subtler and more tragic:
Individual contributors need to demonstrate value
Demonstrating value requires metrics
Metrics create incentives
Incentives shape behavior
Behavior optimizes for the metric, not the user
Each step is locally rational. Each person is doing their job. And the cumulative result is a product that gets progressively more hostile to the people using it.
Here’s another example. In most messaging apps, there’s a button to call someone. This button is conveniently located right where you might accidentally tap it. You’re scrolling through a conversation, your thumb grazes the wrong spot, and suddenly you’re calling your ex at 2 AM.
Why is that button there? Why is it so easy to hit accidentally?
Because someone’s job depends on “calls initiated” going up. If the button were harder to find, fewer people would use it. Fewer people using it means lower numbers. Lower numbers means maybe you don’t get to keep working on this feature. Maybe you don’t get to keep working here at all.
So the button stays prominent. And users keep accidentally calling people they didn’t mean to call.
The Metrics Arms Race
Some folks suggest the solution is more metrics. Add a “calls immediately hung up” counter. Subtract it from “calls initiated.” Now you’re measuring meaningful calls!
You’ll never win this race.
To keep their jobs, people will be extremely clever about gaming whatever measurement system you create. Add a metric, they’ll optimize around it. Add two metrics, they’ll find the corner cases. Add ten metrics, and now you’ve created a system so complex that nobody understands what “good” looks like anymore.
I’ve watched teams spend more energy figuring out how to make their metrics look good than figuring out how to make their product actually good. The metrics become the product. The users become an externality.
The Alternative Nobody Wants To Hear
At some point, you have to have principles.
Not metrics. Principles.
“Don’t interrupt the user unless they explicitly asked you to.”
“Don’t put buttons where they’ll be accidentally pressed.”
“Don’t optimize for engagement when engagement means annoyance.”
These aren’t measurable. You can’t put them in a dashboard. You can’t A/B test them (well, you can, but you’ll lose to the variant that violates them, because that variant’s numbers will be better).
Principles require someone to say: “We just don’t do this, and I don’t have to give you a reason.” And then they have to defend that line when the metrics-driven arguments come. “But the numbers show—” No. We don’t do this.
This is uncomfortable. It feels arbitrary. It feels like you’re leaving value on the table. Maybe you are.
But the alternative is a product that slowly, inexorably, turns against its users. One “engagement optimization” at a time. One “growth hack” at a time. One annoying notification at a time.
Software Design Is An Exercise In Human Relationships
I keep coming back to this phrase because it keeps being true in new ways.
Product development is also an exercise in human relationships. And when we reduce those relationships to metrics, we lose something essential. We lose the ability to say, “This would be rude.” We lose the ability to treat users like people instead of engagement vectors.
The Fitbit doesn’t know I’m annoyed. It only knows I looked at the notification. In the database, that’s engagement. In my lived experience, it’s one more small friction. One more tiny way the device that’s supposed to help me is instead demanding my attention for its own purposes.
I turned off the notification. I found the setting, buried three menus deep, and I turned it off. I’m a technical person who knows these settings exist. Most people won’t. Most people will just get buzzed, over and over, because someone at Fitbit needed their numbers to go up.
I don’t know how to fix this at the industry level. But I know this: the seemingly rational, completely legible, metrics-based product development process is how we got here. The numbers all went up. And the products all got worse.
Maybe it’s time to trust the numbers a little less and trust our sense of what’s right a little more. Even when—especially when—we can’t prove it in a dashboard.

Very good example.
The whole point of putting "Value for the Customer" as the first principle of Lean Thinking is to remind the company leaders that they should constantly fight against that inevitable tendency when organisations scale and become more bureaucratic.
Concrete examples of what this can look like in a company:
- the "Obeya", a room with a big wall where the Chief Engineer / Chief Product Officer describes what good looks like for their product. It is a way to regularly stimulate those conversations and go deeper on the purpose of the product strategy than "maximise these metrics".
- the "gemba" visits from top leadership, regular visits to actually go and see what teams on the ground are doing, without judgement, to initiate those conversations. "What are you working on? The setting to turn off the vibrations? Interesting, can you tell me more?"
Thank you for the article.
I don't think many people could have written this, even though I think many have had a similar idea.
Stating that metrics (the detailed, user-engagement kind) actually degrade products could be suicidal in tech.
As engineering minded people, it's very comforting to think that we can measure everything and have a number attached to every little thing, when in fact the most important things are very hard or even not possible to measure.
What is your take on metrics? Are they fine as long as we add principles to them? Are they good for operational efficiency?