49 Comments
Apr 19, 2023 · Liked by Kent Beck

Is it just me, or was ChatGPT's writing clearly distinguishable? I have created content with various writing styles: humorous, formal, informal. It creates very repetitive content. It might create a sentence, but it would not understand how much to expand on it. Like, at what point does the reader want more details? What will make the story more interesting?


agreed. chatgpt is mid, but in some cases that's a good thing. if you are trying to write creatively, you won't use chatgpt, because it will write the most common and obvious sentence, not the most creative one. the plus is that what it produces is usually easier to understand.

take, for instance, the author's sentence "The differential value of being better at putting words in a row just dropped to nothing." it's a bit tortured. "differential value" and "putting words in a row" are more creative ways of saying "value" and "writing", but at the cost of making people pause and truly try to understand what the author is saying. perhaps that's a good thing, as it forces people to invest more energy or give up.

i had chatgpt rewrite that sentence and it came up with "The relative importance of being better at stringing words together just dropped to nothing." which i think is both less creative and easier to parse.


I would say the winning sentence would be:

"The differential value of stringing words together just dropped to nothing."

So a combination of the two. "Stringing words together" is more elegant than "putting words in a row".


Well said.


It maintains the same entropy in every paragraph, making its writing robotic and monotonous. You can prompt it not to do that, though.


"In essence, a language model's output mirrors its input; the model's inherent limitations don't dictate outcomes, rather the stimulus shapes its response."

That succinct (and, in my view, accurate) response was achieved by applying linguistic constraints, guiding the model toward an accurate answer, and defining the objective of the response (brevity, conciseness). If I had merely asked it to respond to you, it would have said:

"I appreciate your feedback on the ChatGPT's writing capabilities. As an AI language model, I strive to generate content that is versatile and engaging across different writing styles, but I acknowledge that there may be limitations, including repetitiveness and an inability to gauge the optimal level of detail for a reader. Your insight is valuable, as it helps improve the model and address these concerns in future iterations. The technology is constantly evolving, and user feedback plays a crucial role in driving improvements. Thank you for taking the time to share your thoughts."


"Like, at what point does the reader want more details? What will make the story more interesting?"

Why don't you ask ChatGPT about it? I believe writing naturally requires the same kind of retrospection. With ChatGPT, you have to trigger the retrospection yourself. I was able to get much better text out of ChatGPT (GPT-4) once I started talking with it about the text.


This can be beneficial, but IMO, the writing catches up at a point. Can you suggest some prompts for the same?

Apr 26, 2023 · Liked by Kent Beck

As counterintuitive as it seems, I'm pivoting to writing. Writing clearly and writing more.

I've been doing web development for 15 years now. For the past 4 years, 80% of my time was spent on refactoring old codebases to use more modern tech (Backbone -> React). Even though it's my job, I hope it gets automated soon. It's a long process, especially if you do this in a big organization.

I won't make any bold claims, but I hope in the near future, creating software will be 100x more accessible than it is now, and we can focus on building businesses that serve people instead of replacing date-fns with dayjs in 2500 files and hoping every component had test coverage.

Apr 19, 2023 · edited Apr 19, 2023 · Liked by Kent Beck

Did you try including "Writing style: Include punchy analogies. Include in media res. Include callback. Include humor. Include asides." in the prompt? :)

author

"It was like going from a bicycle to a sports car. Sure, both can get you from point A to point B, but the latter does it faster, more efficiently, and with a lot more style. And just like that, ChatGPT had become my sports car, leaving me in the dust with my old bicycle."

author

It didn't seem to get "in media res".

Apr 19, 2023 · Liked by Kent Beck

isn't it: https://en.wikipedia.org/wiki/In_medias_res ? I wonder if the missing 's' matters?

author

I'm sure it did if you were an ancient Roman. Still, better to be right than wrong.

author

Not yet!


I'm just getting around to reading this.

I'm not sure how much of the following to attribute to Confirmation Bias, but when I read the first of the two essays that ChatGPT generated, writing ostensibly in your voice, it read very much like Kent being held hostage and reading a letter on camera written by the perpetrators.

I truly hope we demand better from these LLMs than the kind of thing any of us might have written in our first secondary school course on How to Write An Essay.


Let me add one way in which I've found ChatGPT quite helpful so far.

I co-host a podcast for bowlers. (Don't ask.) I want in particular to dive into the mental/psychological aspects of bowling at an elite competitive level. I have some genuine interest in the answers, so I tend to ask questions as if I were picking the interviewee's brain, rather than as a performance for the audience, such as it is.

Our producer uses ChatGPT to generate 25 questions. I then edit those questions in my own voice, so that I feel more comfortable asking them. I find myself repeatedly thinking, "Oh, yeah! I wanted to ask that! I nearly forgot." ChatGPT seems quite effective at enumerating what I might want to ask, but remains quite awful at phrasing the questions.

And that's how I imagine using these LLMs: for checklists and to generate ideas that I can refine, improve, and put in my own voice. And that's not nothing.
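If you wanted to script that first step rather than paste into the chat window, a minimal sketch might look like the one below. This is purely illustrative: it assumes the OpenAI Python client and a GPT-4 model, which is not necessarily what our producer actually uses, and the topic string is just an example.

```python
# Illustrative sketch only -- assumes the official OpenAI Python client (>= 1.0)
# and an OPENAI_API_KEY in the environment; not our producer's actual setup.
from openai import OpenAI

client = OpenAI()

topic = "the mental and psychological side of bowling at an elite competitive level"

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": "You help prepare interview questions for a podcast."},
        {"role": "user", "content": f"Generate 25 interview questions about {topic}."},
    ],
)

# Print the raw questions; the real work is editing them into your own voice afterward.
print(response.choices[0].message.content)
```

The output is just the checklist; the phrasing still needs a human pass.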


Hi Kent, can you provide some examples of how we can leverage ChatGPT in our day-to-day programming jobs?

author

That's for you to figure out. And me.

Sep 7, 2023 · Liked by Kent Beck

Too many people seem to misunderstand the fundamental job of a developer. We aren't paid to program a machine to do a certain operation, we are paid to understand a particular problem domain and to encode solutions to problems in that domain in a format that is understandable to other people. That is something that AI will never be able to do. Are the latest incarnations of AI impressive and continuing to improve? Certainly. Do they provide new types of tools that accelerate certain specific activities? Definitely. But they only supplement the skills we already have as developers. They don't replace them.

Another great article. Keep up the great work Mr. Beck!


It's interesting to see the dichotomy between the 90% of skills that may lose value and the 10% that will gain leverage. So, do you think we should focus more on developing the skills that can't be replicated by AI, or should we find ways to integrate AI into our existing skillsets? 🤔

author

Integrating, definitely.


I anticipated that would be your response. The primary obstacle could be finding a way to seamlessly merge AI into our current skills while still maintaining a sense of control or advantage.

author

That’s why it’s valuable to do this work—precisely because it’s difficult. Good news is that nobody else has figured it out.

Apr 20, 2023 · Liked by Kent Beck

Oh, Kent. It turns out that you only used it for text generation. (Or at least it's the main focus of your post.)

Try asking it to code, refactor, debug. It may blow your mind. And tell us if it does :)


Not to be a stickler, but I've always used "in medias res". I think that might be the proper way to say it.

author

Not to be a stickler, but shouldn't you have said, "To be a stickler..."😜

And yes, you are correct. Every day is a school day.


I should have. But I was being sheepish, because in today's climate insisting on proper use of Latin phrases gets interpreted as being a nuisance.

author

Not by me. 🙏


Dayum. It doesn't write much like you do, but dayum that's good.


Beyond this, I'd love to hear your thoughts on where you see the gotchas with using ChatGPT for coding. For example, if an AI generates code whose design is not prohibitively difficult for the human programmers involved to understand, do we end up down a rabbit hole of dropping automated tests and relying on AI to maintain more and more of the system whilst the knowledge and skill of humans atrophy?

Does this constitute a whole other level of technical debt?

What can we do to avoid taking the alluring short-term productivity gain of AI code generation at the expense of such potentially disastrous delayed effects?


if chatgpt starts filling the desert of interesting things to read out there, it will at best become netflix... i feel we are trending towards idiocracy


It will improve Disney.


I feel like there is a lot of value in having received an education before AI became widely available -- and the number of people who had the "advantage" of growing up in an AI-free world will never be greater than it is now. Also, currently, AI is mostly trained on the data produced by experts, and its output frequently needs to be checked by experts, but could the available pool of such experts in various fields be negatively affected by our over-reliance on AI, especially during the process of education?


By unleashing ChatGPT on a company’s code base, wouldn’t one be exposing the company to risks that would result in the unleashing being considered criminal, or negligent, or in other ways nefarious?
