Hallucinations: Why AI Confidently Gets Things Wrong
You've learned how to get useful results from AI. Now it's time to learn when not to trust them.
AI tools are fluent, confident, and fast. They're also capable of producing information that sounds completely plausible — and is entirely wrong. Understanding why this happens is one of the most important things you can take away from this course.
What Is a Hallucination?
In AI, a "hallucination" is when the model generates content that is factually incorrect, fabricated, or not grounded in reality — but presents it with the same confident tone as accurate information.
Examples of hallucinations in the wild:
- A lawyer submits a legal brief citing six court cases, all six invented by ChatGPT. None of them exist;
- An AI-generated product description includes a technical specification that sounds credible but is completely made up;
- A summary of a research paper contains a statistic that never appeared in the original document;
- An AI recommends a specific regulation or law that doesn't exist in the jurisdiction mentioned.
The AI does not know it is wrong. It is not lying. It is doing exactly what it's designed to do — generating the most statistically likely continuation of the text — and in these cases, that process produces false output.
Why Does This Happen?
Recall from Section 1: AI predicts the next token based on patterns. It has no internal fact-checker. It has no awareness of what it knows versus what it doesn't know.
When the model encounters a question it cannot answer reliably, it doesn't stop — it generates a response that fits the pattern of what a correct answer would look like. The result is content that is fluent, structured, and wrong.
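To make that concrete, here is a minimal sketch in Python. It assumes nothing about any real model's internals: the "model" below is just a hand-written table of hypothetical probabilities, and every name and number in it is invented for illustration. What it shows is the structural point above: the prediction step selects the most likely continuation and never consults any notion of truth.

```python
# Toy sketch of next-token prediction. All names and probabilities here
# are invented for illustration; a real model learns these from data.

# Imagine the model is continuing the prompt:
#   "The case that established this precedent is ..."
# and its pattern-matching assigns these (hypothetical) probabilities
# to possible next tokens:
next_token_probs = {
    "Smith": 0.31,    # a plausible-sounding case name
    "Jones": 0.27,
    "the": 0.22,
    "unclear": 0.12,
    "unknown": 0.08,  # honest "I don't know" continuations are rare
}

def predict_next_token(probs: dict[str, float]) -> str:
    """Return the most likely continuation. Note what is missing:
    no step asks "is this true?", only "is this likely text?"."""
    return max(probs, key=probs.get)

prompt = "The case that established this precedent is"
print(prompt, predict_next_token(next_token_probs))
# -> The case that established this precedent is Smith
# Fluent and confident, whether or not any such case exists.
```

A real model makes this choice over a vocabulary of tens of thousands of tokens rather than five, but the shape of the step is the same: likelihood in, text out.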
Hallucinations are more likely when:
- You ask about very specific facts, statistics, or citations;
- You ask about recent events after the model's training cutoff date;
- You ask about niche topics with limited training data;
- The question has a "fill in the blank" structure that invites fabrication.
What Hallucinations Are Not
It is worth being precise about this:
- Hallucinations are not the AI being deceptive or malicious;
- They are not a sign that the AI is broken or unusable;
- They are not random errors — they follow predictable patterns;
- They are not unique to one tool — all major AI systems hallucinate.
They are a structural property of how language models work. The right response is not to avoid AI — it is to know when to verify.
The Golden Rule: Fluency Is Not Accuracy
The single most important thing to internalize about AI output:
A response can be beautifully written, logically structured, and completely wrong.
The quality of the language tells you nothing about the quality of the information. AI writes with consistent confidence regardless of whether it is correct. Always treat facts, statistics, names, dates, and citations as unverified until you check them.
1. What best describes an AI hallucination?
2. Why do AI models like ChatGPT sometimes produce information that sounds correct but is actually false, and what does this mean for users?