33 Comments

Exploristan stage ChatGPT questions:

- How fast is the tech getting better?

- What ceiling is there on how good the tech can get?

As you say yourself, “it has gotten obviously worse” and “That’s a sign that it’s bumping up against some natural limit which OpenAI may or may not be able to overcome.”

Well, …

OpenAI has said that they cannot make it much better, that they have reached the limits.

Also, early in the process, experts predicted that if LLMs were widely used to generate internet content, they would "poison themselves" by ingesting their own hallucinatory outputs as inputs, with their outputs getting worse and worse until they became useless. This was demonstrated in tests, and we've been seeing it in practice.

So, judging by the questions you said we should focus on at this stage of the technology, LLMs seem unlikely to really "take off" as a viable new technology "wave."

Maybe. I still think personalized models have a chance to perform well. It’s relatively cheap for me to experiment.

As skeptical as I am about ChatGPT et al. in general, I agree with you that tuned models, based on limited, curated input, seem to at least have the potential to do much better. I'll be interested to see how Rent-a-Kent evolves.

I made a little screencast series of some of my ChatGPT experiments --> https://www.youtube.com/playlist?list=PL4Q4HssKcxYuwbVAgVqwM5od3yLtg9NM0

Just brilliant, Kent. Awesome post.

I've started exploring Copilot this week for the first time, after my company gave the OK.

It took a little while to integrate naturally into my flow as a vim-bindings user, and I think it's making me 5-10% more efficient when programming, since it acts like a better autocomplete.

When writing tests or using a builder, it seems to do roughly what you want, which is uncanny.

The downside is when it suddenly spits out loads of code at you and you have no real idea what it's doing.

For juniors, that problem will be even worse!

However, I've found the juniors on my team have been chatting with ChatGPT like they would with a senior engineer. That's probably more helpful.

Those "chats with a senior developer" is one scenario I hope to address with Rent-a-Kent. Sometimes my model's answers are much better than standard ChatGPT's, at least by my standards.

Anybody playing with TDD in ChatGPT yet?

My results so far have been great. The model produces much higher-quality code if you give it test cases it has to solve for, and it's more likely to be correct on the first try (although I never accept the first result!).
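To give a flavor of what I mean: write the test cases first, paste them into the prompt, and ask for an implementation that makes them pass. A purely illustrative sketch (the function and cases below are made up, not from a real session):

```python
# Step 1: write the tests first and paste them into the prompt, e.g.
# "Write a roman_to_int(s) function that makes these pytest tests pass."
# (roman_to_int and the cases below are purely illustrative.)

def test_single_symbols():
    assert roman_to_int("I") == 1
    assert roman_to_int("X") == 10

def test_subtractive_pairs():
    assert roman_to_int("IV") == 4
    assert roman_to_int("XC") == 90

def test_compound_numerals():
    assert roman_to_int("MCMXCIV") == 1994

# Step 2: the model replies with an implementation; run the tests against it
# before accepting anything. Something along these lines comes back:
VALUES = {"I": 1, "V": 5, "X": 10, "L": 50, "C": 100, "D": 500, "M": 1000}

def roman_to_int(s: str) -> int:
    total = 0
    for ch, nxt in zip(s, s[1:] + " "):
        value = VALUES[ch]
        # Subtractive notation: a smaller symbol before a larger one is negated.
        total += -value if VALUES.get(nxt, 0) > value else value
    return total
```

The tests double as an executable spec, so "did the model get it right?" stops being a matter of opinion.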

Very interesting, can you elaborate on your process here? How do you do TDD in ChatGPT?

AI talk is creepy, and most of the art has a strange, unnatural lighting leaning toward the red spectrum. Yet I hope AI takes most jobs sooner rather than later. Then I can finally receive my UBI check, work more on my creative projects, and maybe even get to know more neighbors in my community! UBI, funded by taxing the mega-rich, will be necessary to avoid chaos in society. However, most men in particular will have a hard time adjusting to this new AI reality, in that they derive a sense of worth from their work status. This will be the challenge moving forward.

I think it may be useful to focus in and explore the costs more fully. You mention that the results of your exploration are very cheap and could potentially reduce the value of a lot of the work you currently do, but are these results actually cheap? What's the real ROI after factoring in things like R&D costs, the subsidy on queries that's paid for by investor money, and various externalities, like the use of copyrighted material, or the way the LLM's inherent lack of attribution means audiences have no way to connect with, and engage in a cultural feedback loop with, the work that underlies any given ChatGPT response?

How do you behave differently given your evaluation of those costs?

LLMs seem to cost a lot to train, maintain, and host vs. the things they'd be used in place of, which in a lot of cases seems to boil down to "search" and "asking questions on forums". The value produced seems low to me compared to the compute power used, so in terms of my own behavior, I'm focusing my professional development time on software writing fundamentals. For now, I'm treating LLMs the same way that I treated blockchain and NFTs. While LLMs have more going on under the hood than crypto, their market seems to follow a similar pattern of generating investor hype through speculation on a future value that it's unlikely to be able to deliver and that seems outsized in comparison to the low or negative ROI that the technology generates today. After adding in that a significant part of its value is tied up in legitimate legal concerns around copyright infringement and ethical concerns about plagiarism, it feels like a very tall order to produce something valuable enough to outweigh its end to end costs.

Replacing Google or other search engines with ChatGPT seems suicidal to me. There is no attribution, and the AI's algorithms blow your mind. And there are experts who warn of degradation of the outputs: when scraping the web at scale, ChatGPT ingests data that it itself has generated, reinforcing errors and bias. On the other hand, image-generation AIs operate on a massive violation of copyright. In my opinion, the hype around AI is like that around blockchain, and it may (fortunately) last fewer years.

Thank you for the feedback. You are wise to be skeptical and aware of the source when looking at information from a model. Here's the thing, though: the information I get from Google is also twisted contrary to my interests. If I want to learn this new landscape, I need to walk it. So I choose to walk it with my eyes open.

But how will you know whether the information ChatGPT presents to you is a hallucination or contains incorrect data?

Is walking with your eyes open putting faith in a tool that cites nothing and whose biases are, again, more than well documented?

I use the same tools I use with all the rest of the information I get off the internet. ChatGPT has vulnerabilities but so does every source. Part of what we’re all learning is how to navigate biased narrators.

I like using more than one app. I compare and contrast. Like search engines when they started. Like the help function in Windows. Those were always terrible until they weren't. Either something jiggers, or it goes the way of the Dodo. We shall see...

The question is not how to search for information on the internet, but how to explore AI. If you don't want to explore AI, then do whatever.

I think generally, as seniors in our fields, we should know when something just doesn't smell right and needs a deeper investigation. For anything serious, like for a client, it's proper investigation all the way.

I think, just like with any tool, you need to validate against more than one source. Not every Google search yielded valid information in the past, and the same holds true here. It especially depends on the usage and the subject. For example, I've used it to write narratives for role-playing adventures in Star Wars. I enjoy that it can pull subjects like language or astrophysics into a single chat, and there I worry a bit less about accuracy than I would asking it for medical or mental health advice. Context is key.

When people figured out Google's search algorithm, many results just became BS.

Other AIs like MS Copilot and Bard provide attribution links, I want to say? I've tried so many I forget 😅

MS Bing Chat provides attribution links -- and it's a great "sanity check" on the responses because you get a sense of where it's drawn the information from.

I switched from Google to Bing some years ago because Google's results seemed to be getting worse and Bing seemed to fare better (perhaps because people were optimizing content more for Google, and so it featured more SEO-tuned nonsense?).

I'm slowly "forcing" myself to use Bing Chat instead of regular Bing search these days. Where it really shines is follow-up questions: since it retains the context of previous searches in the same session, those additional questions can be much shorter and can use pronouns to refer back to previous questions and answers. It's also much better at summarizing a topic, so I don't have to skim multiple results to form my own summary, which is quite a time-saver sometimes.

Loved the post, Kent. Recently I read about "active learning" vs. "passive learning". I can see how the exploration approach is fundamental to active learning.

As a side note, I am having fun trying out this GodMode app, where I can enter a prompt and see several LLMs (including my own local one) respond differently. And I notice my instinct to type things into Google is diminishing rapidly.

https://github.com/smol-ai/GodMode
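The specific app matters less than the pattern: fan one prompt out to several models and compare the answers. A minimal sketch of that idea (not GodMode's own code; it assumes the `openai` Python client and a local OpenAI-compatible server such as Ollama, and the model names are just examples):

```python
# Send one prompt to several chat models and print the answers side by side.
# Illustrative only: assumes OPENAI_API_KEY is set for the hosted model and an
# OpenAI-compatible local server (e.g. Ollama) is listening on port 11434.
from openai import OpenAI

PROMPT = "Explain the explore/expand/extract model in two sentences."

# (label, client, model name) -- adjust to whatever models you actually run.
backends = [
    ("gpt-4o-mini", OpenAI(), "gpt-4o-mini"),
    ("llama3 (local)", OpenAI(base_url="http://localhost:11434/v1", api_key="unused"), "llama3"),
]

for label, client, model in backends:
    reply = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": PROMPT}],
    )
    print(f"--- {label} ---\n{reply.choices[0].message.content}\n")
```

Seeing two or three answers next to each other makes the biases and hallucinations much easier to spot than any single answer does.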

I don't think AI like ChatGPT will be a problem if it's used by experienced developers simply to speed up development. The big issue is junior developers using it as a crutch to write code that they do not fully understand. When any of that code breaks, what will they do? My guess is go to StackOverflow, or ask ChatGPT.

What advice would you give to individuals who are struggling to shift from ExtractLand to Exploristan in order to keep up with the ever-changing technological landscape?

Be prepared to emotionally treat efficient failure like you currently treat success. Stop trying to make experiments "succeed". Be prepared to try crazy ideas if they are cheap enough. And don't bother with keeping up. It's more fun to stay ahead.

Some quick "thinkies" on your 10%...

Use AI to spend less time on writing. I've used AI to clean up my writing: throw thoughts down as quickly as possible, then dump them into ChatGPT to clean them up. It makes the writing a bit stale, though, so since you already have a bot trained in your voice, that's even better. Maybe that could help clean up your writing and free you up for more personal interaction in the comments... a pivot for sure!

Create an AI-driven XP companion that guides the dev through the test-code-refactor cycle: automatically starting new tests as soon as the first one passes, suggesting refactorings, etc.
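To sketch what that companion loop might look like (everything here is hypothetical: `ask_model` is a stand-in for whatever LLM call you'd wire in, and the loop simply shells out to pytest):

```python
# Hypothetical skeleton of an AI-driven XP companion: run the tests, then ask a
# model for the next move depending on whether the bar is red or green.
import subprocess
from pathlib import Path

def ask_model(prompt: str) -> str:
    """Placeholder: plug in whatever LLM client you'd actually use."""
    raise NotImplementedError

def companion_step(source_file: str) -> str:
    """Run the tests once and ask the model for the next TDD move."""
    result = subprocess.run(["pytest", "-q"], capture_output=True, text=True)
    code = Path(source_file).read_text()
    if result.returncode == 0:
        # Green bar: suggest a refactoring or the next test to write.
        return ask_model(
            "All tests pass for the code below. Suggest one small refactoring "
            "OR the next test case to write, whichever is more valuable:\n\n" + code
        )
    # Red bar: ask only for the minimal change that gets back to green.
    return ask_model(
        "These tests are failing. Propose the smallest change to make them pass:\n\n"
        + result.stdout + "\n\n" + code
    )

# Usage idea: call companion_step("calculator.py") after each edit, keeping the
# human in the loop to accept or reject every suggestion.
```

The point is that the model never drives; it just proposes the next small step while the test run stays the arbiter.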

Great post, Kent. I recall your Explore/Expand/Extract from a brief talk you did at a Lean Startup conference. One way to *bound* experiments is to use "Opportunity Discovery": start with existing products and examine the needs they address and for whom, then experiment with how AI might address those specific needs better. Tech changes over time, and who has a specific need might evolve, but the needs themselves do not.

I’ve kind of replaced transitional search with AI only in certain cases. I am now thinking going more all in would be helpful in the case of more experiments as you called out. Looking forward to more of your observations!

I did the same. If I want a pretty exact answer to something, I don't use Google anymore. I don't want results, I want an answer. So far, so good.
