What Is AI Hallucination?

AI hallucinations occur when a generative AI application produces fabricated, inaccurate, or implausible information in its outputs.

Visier HR Glossary

Artificial intelligence (AI) hallucinations, or fabrications, are inaccurate, implausible, or wholly made-up outputs produced by a generative AI application in response to a prompt. They occur when a large language model (LLM), such as the models behind ChatGPT or Google Bard, generates false information.

What causes AI to hallucinate?

AI hallucinations happen when a model has learned, or been trained, imperfectly. This can stem from errors or biases in the data used to train it, or from an incorrect interpretation of that data. Zapier explains: “Because AI tools like ChatGPT work by predicting strings of words that it thinks best match your query, they lack the reasoning to apply logic or consider any factual inconsistencies they’re spitting out.” In other words, they say, “AI will sometimes go off the rails trying to please you.”
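To make that point concrete, here is a deliberately toy sketch in Python of next-token sampling. The prompt and probability numbers are invented for illustration; the point is that the model picks whichever continuation its training data made most likely, and nothing in the process checks the claim against reality.

```python
import random

# Toy next-token "model": probabilities learned from training text, not from
# verified facts. Any gap or bias in that data is baked into these numbers.
# (All values here are hypothetical, for illustration only.)
next_token_probs = {
    "The first image of an exoplanet was taken by": {
        "JWST": 0.55,     # plausible-sounding but wrong continuation
        "the VLT": 0.40,  # correct continuation
        "Hubble": 0.05,
    }
}

def sample_next(prompt: str) -> str:
    """Pick the next token by probability alone: no logic, no fact check."""
    tokens, weights = zip(*next_token_probs[prompt].items())
    return random.choices(tokens, weights=weights, k=1)[0]

print(sample_next("The first image of an exoplanet was taken by"))
# More than half the time this prints "JWST": fluent, confident, and false.
```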

What is an example of an AI hallucination?

One example that was widely shared in the media and on social channels came from Google’s own promotional video for its generative AI product, Bard, which included an untrue claim: that the James Webb Space Telescope took the very first image of an exoplanet. It hadn’t, as one social media user quickly pointed out.

How can I stop AI from hallucinating?

OpenAI has announced that it is taking steps to combat hallucinations with a new AI training method that rewards each individual step of reasoning rather than only the final conclusion, an approach it calls “process supervision,” as opposed to “outcome supervision.”
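OpenAI hasn’t published that training code, but the difference between the two reward schemes can be sketched in a few lines of Python. The chain-of-thought grades below are hypothetical: a solution with one flawed step and a luckily correct final answer scores full marks under outcome supervision but is penalized under process supervision.

```python
# Hypothetical graded solution: each reasoning step is marked valid/invalid,
# and the final answer is marked correct/incorrect. Values are illustrative.
solution = {
    "steps_valid": [True, False, True],  # step 2 contains a reasoning error
    "final_answer_correct": True,        # ...but the answer is right by luck
}

def outcome_reward(sol: dict) -> float:
    """Outcome supervision: reward only the final conclusion."""
    return 1.0 if sol["final_answer_correct"] else 0.0

def process_reward(sol: dict) -> float:
    """Process supervision: reward each individual reasoning step."""
    steps = sol["steps_valid"]
    return sum(steps) / len(steps)

print(outcome_reward(solution))  # 1.0  -- the flawed step goes unnoticed
print(process_reward(solution))  # 0.67 -- the model is pushed to fix step 2
```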

Users can also help to minimize AI hallucinations through their prompting practices (sketched in code after this list):

- Being very specific in their prompts
- Including any unique data or sources they have access to
- Limiting the possible outcomes
- Providing context by telling the AI what its role is
- Explicitly telling it not to lie or make up answers
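As a rough illustration of those practices, here is one way to assemble such a prompt with the OpenAI Python SDK. The model name, source text, and wording are placeholders; any chat-style LLM API can be used the same way.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Unique data the model should ground its answer in (placeholder text).
source_text = "Q3 attrition was 4.2%, per the internal HR dashboard."

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; use whichever model you have access to
    temperature=0,        # limit creative drift in the output
    messages=[
        # Provide context: tell the AI what its role is, and explicitly
        # forbid it from making up answers.
        {"role": "system", "content": (
            "You are an HR analyst. Answer ONLY from the provided source. "
            "If the source does not contain the answer, say 'I don't know' "
            "rather than guessing or inventing anything."
        )},
        # Be specific, include your data, and limit the possible outcomes.
        {"role": "user", "content": (
            f"Source: {source_text}\n\n"
            "What was attrition in Q3? Give one number, nothing else."
        )},
    ],
)
print(response.choices[0].message.content)
```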

It’s also important to carefully review and fact-check any generated content to ensure that it is accurate.
