Question the hype: How useful is ChatGPT?
Hot take: ChatGPT is not that useful. With the current storm around OpenAI amplifying the hype, now is a great time to question it.
More often than companies like OpenAI care to admit, generative AI is the wrong tool for the job. Too often, it's a hammer in search of a nail.
Here are two examples from my own work.
➤ Summarizing text
😕Using generative AI to summarize texts never works well for me because the summaries fail to reflect what I think is important in the text.
😕The reason is that summarizing a text is not a purely technical task. It requires deciding what is important, and that depends on context and values.
😕Can you put the context and values into the prompt? Maybe sometimes. But that hasn't worked well in my case, and the attempts take longer than just summarizing the text myself.
➤ First drafts, brainstorming, and outlining
😕This is the worst use of generative AI for me. The output is usually banal. But the main problem is that it constrains my thinking and wastes time.
😕Writing the first draft is sensitive work because it shapes how you think about the project. It's difficult to recover from a bad first draft because it's difficult to re-imagine what you've already imagined.
😕Can you solve the problem with better prompt engineering? Maybe sometimes. But when the task is to articulate a thought still taking shape in your mind, generative AI can lead you astray. It gives you a blend of other people's thoughts, and that can obscure your own, especially at an early stage, when they're not fully formed.
➤ All of this is not to say that generative AI is useless.
I do think it's a revolutionary technology with many helpful applications, including in my own work. For example, the image in the banner is AI-generated! But it is not as useful as companies like OpenAI would have us believe.