Silicon dreamin’

AI models make stuff up. How can hallucinations be controlled?

It is hard to do so without also limiting models’ power

Illustration: a laptop with a confused face repeated in a colourful spiral (Shira Inbar)

It is an increasingly familiar experience. A request for help to a large language model (LLM) such as OpenAI’s ChatGPT is promptly met by a response that is confident, coherent and just plain wrong. In an AI model, such tendencies are usually described as hallucinations. A more informal word exists, however: these are the qualities of a great bullshitter.

There are kinder ways to put it. In its instructions to users, OpenAI warns that ChatGPT “can make mistakes”. Anthropic, an American AI company, says that its LLM Claude “may display incorrect or harmful information”; Google’s Gemini warns users to “double-check its responses”. The throughline is this: no matter how fluent and confident AI-generated text sounds, it still cannot be trusted.
