When I was developing a piece of rural property, I was in awe of the guy who ran the “escavator”. Not excavator. Escavator. But he’d been running one for 30 years so I wasn’t about to “correct” his pronunciation. Also he cooked meth on the side & he was scary when he was high. But that’s not my point.
My point is that he looked at terrain completely differently from how I looked at terrain. I’d see a hill & think, “Well, I’ve got to go around that.” He’d look at a hill & think of all the different places he could put it, all the shapes he could leave it in.
Warning: In what follows my main point is about digging into what I call background work more frequently because it’s become so much cheaper & easier. That seems to get lost in the example I chose—AI potentially replacing programmers—& the means I chose for analysis—namely Claude. Neither AI vis-à-vis programmer employment nor LLM hallucination invalidates my main point—be more curious, it’s cheaper now.
Analysis
When I had the impulse to use Thinkie Reverse Causality on the relationship between employee count & revenue, the subsequent analysis took most of a day. I was digging data out of SEC filings, figuring out how to render the data, iterating on the analysis.
All this is hard enough work that I hesitate to undertake it more than once a month. I think, “I wonder…”, grimace 😬, & move along. Those “I wonder” moments are what I need to change. Analysis just got cheaper.
I posted the following on LinkedIn the other day.
60K impressions, 480 likes, 80 comments & here’s the thing—nobody did the assignment. Nobody drafted a before & after income statement & balance sheet. This is the moment I’m trying to call out. Because it’s easy now to start to do the assignment.
Claude’s Answer
I asked Claude the same question. A minute later I got this. Of course the analysis is shallow & flawed. Not the point! What I have here is good enough to decide whether I want to dig deeper.
That same moment, that “gotta go around this hill” moment, has changed. We can now decide to routinely dig into analysis that would have been too expensive to even start previously.
Financial Impact Analysis: Tech Company Before & After AI-Enabled Staff Reduction
Base Assumptions
50% reduction in software engineering headcount
Maintained productivity through AI augmentation
Using average tech company compensation metrics from 2023
Analysis based on public financial data from major software companies
Income Statement Impact
(All figures in millions USD)
Before AI Implementation
After AI Implementation
Balance Sheet Impact
(Key changes only)
Assets
Increase in Cash (+$150M annually from improved operating income)
New Intangible Assets: AI software licenses & implementation costs
Potential reduction in office space & equipment
One-time increase in restructuring reserves for severance
Liabilities
New technology lease obligations or licensing payables
One-time increase in severance liabilities
Reduced ongoing compensation liabilities
Real-World Examples
Microsoft (2023)
Laid off 10,000 employees (~5% of workforce)
Reported $1.2B in severance costs
Maintained revenue growth while reducing headcount
Invested heavily in AI infrastructure
Meta (2022-2023)
Cut 21,000 jobs
Saved $1B+ in annual costs
Maintained user growth and ad revenue
Increased AI investment by $4B
IBM (2020-2023)
Ongoing restructuring with AI focus
~$1.5B average annual savings from automation
Reinvested savings into AI/cloud capabilities
Maintained service delivery with reduced headcount
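For anyone who wants to actually do the assignment rather than react to it, the before-&-after arithmetic fits in a few lines. A toy sketch, with every figure a placeholder chosen only so the result lines up with the $150M operating-income improvement assumed above:

```python
# Toy before/after income statement. Every number is a placeholder,
# NOT from any real company -- the point is the shape of the exercise.
before = {
    "revenue":          10_000,  # $M
    "engineering_comp":    800,  # e.g. 4,000 engineers * $200K fully loaded
    "other_opex":        7_500,
}
before["operating_income"] = (
    before["revenue"] - before["engineering_comp"] - before["other_opex"]
)

# Assumptions from the analysis above: cut engineering headcount 50%,
# hold revenue flat, add new AI licensing & infrastructure costs.
ai_costs = 250  # $M/yr, hypothetical
after = dict(before)
after["engineering_comp"] = before["engineering_comp"] * 0.5
after["other_opex"] = before["other_opex"] + ai_costs
after["operating_income"] = (
    after["revenue"] - after["engineering_comp"] - after["other_opex"]
)

delta = after["operating_income"] - before["operating_income"]
print(f"operating income change: +${delta:,.0f}M")
```

Shallow? Absolutely. But it’s a concrete starting point you can argue with, which is more than an emotional burp gives you.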
Emotional Burps
20 years of the internet have trained us to respond purely emotionally. Look at the comments on my original post.
Business is about maximizing profits..however that can be achieved..
Generating tonnes of code via GenAI, claim tax relief as R&D, huge balance sheet assets, inflated valuation, sell to clueless investors, profit 💰💰💰
But the code is unmanageable and riddled with 0-day vulnerabilities.
The best trick that engineering has pulled off to date is to convince marketing that what amounts to a glorified guessing game - one that consumes energy like it's mining cryptocurrency - is, in fact, a manifestation of intelligence.
Again! Nobody did the assignment! We have gotten into the habit of responding in emotional burps instead of doing the work. I’m noting, not blaming. The incentives encourage burping—burping is less effort with more upside if you spark outrage.
Caveats
If you’re going to do this kind of analysis, be responsible.
Make progressive investments. Spend the first 5 minutes finding out if the next hour is going to be worth it. Spend the hour finding out if a day will be worth it.
Double check everything. Make sure numbers add up. I fed Claude’s analysis to Perplexity & got good pointers for where to follow up.
Publish your results. Write up what you find. Whether people are interested or not, writing will accelerate your learning.
But for goodness sake, dig in! It’s cheaper now, more fun, & potentially accelerates our collective learning. We no longer have to steer around “I wonder”s. We can plow right through them.
P.S.
When I said a while back “the economic value of 90% of my skills just went to $0 but the value of the other 10% just increased by 1000X”, people reasonably asked, “What’s the 10%” & I reasonably responded, “How would I know?” The above is an example of that 10%—asking good “I wonder” questions & then following up.
I just came back from Clojure/conj in DC. This year there were several talks about integrating LLMs into our developer workflows -- and the successes were far more about using them for exploratory design conversations than simple code generation.
I've been using GitHub Copilot (Business) with VS Code for a while and I'm only just starting to develop workflows that involve more than code generation -- and realizing the benefit of being able to ask Copilot questions within the editor instead of switching context to a browser and "searching" for information.
Google went on red alert when ChatGPT started to gain public interest in late 2022 because of the threat to their core business, search. The “I wonder”s you describe are the deeper, enhanced search that LLMs offer. In the early days of the internet, one had to know how to find things (and that was the edge); with the advent of Google Search it got easier, but you still had to try a few times to transform your question into a query. Now you can have a conversation with a chatbot and find answers. Moreover, one question to a chatbot can replace many queries and a lot of reading and sifting through docs. We are basically zeroing in on the key human capability: wondering about interesting things and asking “the right” questions.
The issue of reliability and confidence still needs to be solved. It took Google a lot of work to filter garbage sites out of their results, but in the worst case you could at least see that a page was not credible. The chat interface obfuscates the sources and interpolates (aka hallucinates) between them, so we gain convenience at the cost of correctness.
This trade-off may actually be OK, because we can still look for original sources, and “I wonder”s are now a lot faster. As long as we don’t get too lazy. (I am secretly waiting for the article where an arrogant CEO makes a bad business decision based on a conversation with a chatbot!)