Hallucination
Also known as: AI Hallucination, Confabulation
In one sentence
When an AI model confidently generates false, fabricated, or nonsensical information as if it were fact. The model isn't lying—it's producing statistically plausible text that happens to be wrong.
Explain like I'm 12
Imagine a friend who always gives you an answer, even when they don't actually know. They're not trying to trick you—they just fill in gaps with their best guess and sound really sure about it.
In context
Hallucinations show up in many forms. ChatGPT might invent academic citations that don't exist, complete with fake authors and journal names. A coding assistant could reference API methods that were never part of a library. Google's Bard once claimed, in its 2023 launch demo, that the James Webb Space Telescope took the first picture of an exoplanet; in fact, ground-based telescopes had imaged exoplanets years earlier. These errors are especially dangerous because the AI's confident tone makes them hard to spot without fact-checking.
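To make the coding-assistant case concrete, here is a minimal sketch using Python's real requests library. The commented-out get_json() call is the kind of method a model might plausibly hallucinate: it reads naturally but does not exist in requests. The URL is a placeholder, not a real endpoint.

```python
import requests

# Hallucinated: looks plausible, but requests has no top-level
# get_json() function, so uncommenting this line raises
# AttributeError: module 'requests' has no attribute 'get_json'.
# data = requests.get_json("https://api.example.com/items")

# The real API: fetch the response, then decode its JSON body.
response = requests.get("https://api.example.com/items", timeout=10)
data = response.json()
print(data)
```

A quick hasattr(requests, "get_json") check, or simply running the snippet, exposes the fabrication immediately; this is why running AI-generated code and comparing it against the library's documentation is one of the most reliable hallucination checks.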
Related Guides
Learn more about Hallucination in these guides:
Evaluating AI Answers (Hallucinations, Checks, and Evidence) (Intermediate, 10 min read)
How to spot when AI gets it wrong. Practical techniques to verify accuracy, detect hallucinations, and build trust in AI outputs.

AI Failure Modes and Mitigations: When AI Goes Wrong (Intermediate, 11 min read)
Understand how AI systems fail and how to prevent failures. From hallucinations to catastrophic errors, learn to anticipate, detect, and handle AI failures gracefully.

AI Tools Compared: ChatGPT vs Claude vs Gemini vs Copilot (2026) (Beginner, 18 min read)
A living comparison of the major AI tools, updated as models and pricing change. Last updated February 2026 with GPT-5.2, Claude Opus 4.6, Gemini 3 Pro, and the rise of open-source challengers.