Understanding AI for Work

Hallucinations: Why AI Confidently Gets Things Wrong


You've learned how to get useful results from AI. Now it's time to learn when not to trust them.

AI tools are fluent, confident, and fast. They're also capable of producing information that sounds completely plausible — and is entirely wrong. Understanding why this happens is one of the most important things you can take away from this course.

What Is a Hallucination?

Definition

In AI, a "hallucination" is when the model generates content that is factually incorrect, fabricated, or not grounded in reality — but presents it with the same confident tone as accurate information.

Examples of hallucinations in the wild:

  • A lawyer submitted a legal brief citing six court cases; all six had been invented by ChatGPT, and none of them existed;
  • An AI-generated product description includes a technical specification that sounds credible but is completely made up;
  • A summary of a research paper contains a statistic that never appeared in the original document;
  • An AI recommends a specific regulation or law that doesn't exist in the jurisdiction mentioned.

The AI does not know it is wrong. It is not lying. It is doing exactly what it's designed to do — generating the most statistically likely continuation of the text — and in these cases, that process produces false output.

Screenshot description: A chat window in which a user asks: "What were the main findings of the 2021 Nielsen report on remote work productivity?" The AI responds with a detailed, confident-sounding summary (specific percentages, named authors, key conclusions), all presented as fact and with no hedging language. A red annotation box overlays the output with the label: "This report does not exist. All details were fabricated by the model." The contrast between the confident tone and the fabricated content is the point.

Why Does This Happen?

Recall from Section 1: AI predicts the next token based on patterns. It has no internal fact-checker. It has no awareness of what it knows versus what it doesn't know.

When the model encounters a question it cannot answer reliably, it doesn't stop — it generates a response that fits the pattern of what a correct answer would look like. The result is content that is fluent, structured, and wrong.
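The "predict the next token, never verify it" behavior can be sketched with a deliberately tiny word-level model. This is a toy stand-in, not how real LLMs are built: the two-sentence training text and the greedy word-picking rule are illustrative assumptions. The point it demonstrates is structural, though — the generator always emits the statistically most likely continuation, whether or not that continuation is true.

```python
# Toy next-word predictor: it learns which word tends to follow which,
# then always emits the most likely continuation. It has no notion of
# truth, only of pattern frequency. (Illustrative only; real models use
# neural networks over tokens, but the "predict, don't verify" point holds.)

training_text = (
    "the report found that productivity rose by ten percent "
    "the report found that costs fell by ten percent"
)

# Count word -> next-word frequencies from the training text.
follows: dict[str, dict[str, int]] = {}
words = training_text.split()
for a, b in zip(words, words[1:]):
    follows.setdefault(a, {}).setdefault(b, 0)
    follows[a][b] += 1

def continue_text(prompt: str, length: int = 8) -> str:
    """Greedily append the statistically most likely next word."""
    out = prompt.split()
    for _ in range(length):
        options = follows.get(out[-1])
        if not options:  # no learned pattern -> the toy model just stops
            break
        # Pick the most frequent follower: fluent output, never fact-checked.
        out.append(max(options, key=options.get))
    return " ".join(out)

print(continue_text("the report"))
# -> "the report found that productivity rose by ten percent the"
```

Notice that the model "answers" confidently even though it has no way to know whether productivity actually rose; it simply continues the pattern. That is the mechanism behind hallucinations, scaled down.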

Hallucinations are more likely when:

  • You ask about very specific facts, statistics, or citations;
  • You ask about recent events after the model's training cutoff date;
  • You ask about niche topics with limited training data;
  • The question has a "fill in the blank" structure that invites fabrication.

What Hallucinations Are Not

It is worth being precise about this:

  • Hallucinations are not the AI being deceptive or malicious;
  • They are not a sign that the AI is broken or unusable;
  • They are not random errors — they follow predictable patterns;
  • They are not unique to one tool — all major AI systems hallucinate.

They are a structural property of how language models work. The right response is not to avoid AI — it is to know when to verify.

The Golden Rule: Fluency Is Not Accuracy

The single most important thing to internalize about AI output:

A response can be beautifully written, logically structured, and completely wrong.

The quality of the language tells you nothing about the quality of the information. AI writes with consistent confidence regardless of whether it is correct. Always treat facts, statistics, names, dates, and citations as unverified until you check them.

1. Which of the following best describes an AI hallucination?

2. Why do AI models like ChatGPT sometimes produce information that sounds correct but is actually false, and what does this mean for users?



Section 3. Chapter 1
