I wrote a longer-form piece over at post.news about the problems with using the verb "hallucinate" to describe AI chatbots that make things up.

Here's the link for those who want to read it formatted there.

post.news/article/2Lr2DCy9lQz0…

I'll serialize it here as well, below.

in reply to Carl T. Bergstrom

So the fundamental problem is not the inaccuracy of #AI word choices; it’s the inaccuracy of #ArtificialIntelligence *researchers’* word choices in describing the behavior they’ve created.

#ML #MachineLearning #ChatGPT #OpenAI #Bing #Bard

in reply to Carl T. Bergstrom

I know that wasn’t your conclusion, but here you say that “hallucinate” is established AI jargon, and you also say that both the medical and common understandings of the term do not apply. fediscience.org/@ct_bergstrom/…

IMHO it’s a fanciful metaphor that’s now breaking badly as people confuse AI output with evidence of consciousness. Ergo, a poor choice of words.


Yes, the term "hallucinate" has an established meaning as AI jargon. Loosely, and in the context of large language models (LLMs) such as GPT-3, it refers to situations in which the AI makes claims that were not in its training set and have no basis in fact.

But I want to look at how this use of language in public communications perpetuates misunderstandings about AI and helps distance the tech firms that create these systems from the consequences of their failures.


in reply to Carl T. Bergstrom

Some AIs even caution us directly in the fine print, as Galactica did.