8 Comments

I just came back from Clojure/conj in DC. This year there were several talks about integrating LLMs into our developer workflows -- and the successes were far more about using them for exploratory design conversations than about simple code generation.

I've been using GitHub Copilot (Business) with VS Code for a while, and I'm only just starting to develop workflows that involve more than code generation -- and to realize the benefit of being able to ask Copilot questions within the editor instead of switching context to a browser and "searching" for information.

Google went on red alert when ChatGPT started to gain public interest in late 2022 because of the threat to their core business, search. The “I wonder”s you describe are that deeper, enhanced search that LLMs offer. In the early days of the internet, one had to know how to find things (and that was the edge); with the advent of Google search it got easier, but you still had to try a few times to transform your question into a query. Now you can have a conversation with a chatbot and find answers. Moreover, one question to a chatbot can replace a lot of queries and a lot of reading and sifting through docs. We are basically zeroing in on the key human capability: wondering about interesting things and asking “the right” questions.

The issue of reliability and confidence still needs to be solved. It took Google a lot of work to filter garbage sites out of their results, but in the worst case you could at least see that a page was not credible. The chat interface obfuscates the sources and interpolates (aka hallucinates) between them, so we gain convenience at the cost of correctness.

This trade-off may actually be OK, because we can still look up the original sources, and “I wonder”s are now a lot faster. As long as we don’t get too lazy. (I am secretly waiting for an article to come out about an arrogant CEO who made a bad business decision based on a conversation with a chatbot!)

Killer stuff man, thank you!

The hidden picture that these tech companies won't talk about: AI automation and reductions in force will have diminishing returns in the long run. Exponential costs to keep training the models, purchase replacement GPUs (they burn out quickly), build new data centers, and run facilities like Three Mile Island are going to far outstrip any revenue they pull in from their investments and consolidation efforts. That's why they're all on a Hail Mary quest for "AGI". If AGI remains elusive, they're all in a wild race to the bottom.

Yes, and... my point stands that analysis has become cheap & learning to analyze on a whim has become a valuable skill.

Okay fine, you can keep your point.

I think your assumptions need adjustments - why would a business stop at replacing engineers with AI? You'll get more out of AI if you expand it to other functions: Marketing, HR, Management, Bookkeeping, etc. The potential for cost reductions is enormous.

$1000
($100)
-------
$900

One-time and ongoing costs, like licensing and LLM training, are negligible in the long run.

Great! Use the LLM replies as prompts for further thinking, just like you did here.
