AI GLOSSARY
Hallucination
Large Language Model (LLM) Terms
When a language model generates content that is factually incorrect, fabricated, or unsupported by its input, yet presents it with apparent confidence. The term entered mainstream AI discourse around 2021; some critics prefer "confabulation," arguing that it is more technically accurate. Hallucination is one of the most significant challenges in deploying language models for tasks where accuracy matters, and it stems from models being trained to generate plausible-sounding text rather than verified facts.
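Grounding a model's output in retrieved sources is the most common mitigation. The sketch below is a minimal, illustrative Python example of that idea, assuming a hypothetical two-step setup: a prompt that restricts the model to supplied passages, plus a crude word-overlap check on the answer. The function names, prompt wording, and overlap heuristic are assumptions for illustration, not a standard API or a production-grade hallucination detector.

```python
# Illustrative sketch only: grounding a prompt in retrieved passages and
# applying a naive post-hoc support check. Real systems use an LLM API for
# generation and stronger verification (e.g., entailment models).

def build_grounded_prompt(question: str, passages: list[str]) -> str:
    """Assemble a prompt that constrains the model to the supplied sources."""
    sources = "\n".join(f"[{i + 1}] {p}" for i, p in enumerate(passages))
    return (
        "Answer the question using only the numbered sources below. "
        "If the sources do not contain the answer, reply 'I don't know.'\n\n"
        f"Sources:\n{sources}\n\nQuestion: {question}\nAnswer:"
    )

def overlaps_with_sources(answer: str, passages: list[str]) -> bool:
    """Very rough hallucination signal: flag answers whose content words
    never appear in any retrieved passage."""
    content_words = {w for w in answer.lower().split() if len(w) > 3}
    source_words = {w for p in passages for w in p.lower().split()}
    return bool(content_words & source_words)

if __name__ == "__main__":
    passages = ["The Eiffel Tower was completed in 1889 for the World's Fair."]
    print(build_grounded_prompt("When was the Eiffel Tower completed?", passages))
    print(overlaps_with_sources("It was completed in 1889.", passages))  # True
```

The design point is that the model is asked to answer from provided evidence rather than from its parametric memory, and that its output can be checked against that same evidence afterward; both steps reduce, but do not eliminate, hallucination.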
See also: grounding, grounded generation, retrieval-augmented generation.