
How Language Models "Think"


You don't need to understand how a car engine works to drive — but knowing that it runs on fuel helps you avoid running out of gas. The same logic applies to AI. You don't need a computer science degree, but understanding one core idea will make everything else in this course click.

Prediction: The Core Idea

Large language models (LLMs) — the technology behind ChatGPT, Claude, Gemini, and others — work by predicting what comes next.

Given a sequence of words, the model calculates which word (or phrase) is most likely to follow, based on patterns it learned from enormous amounts of text: books, articles, websites, code, and more.

It's similar to the autocomplete on your phone — except trained on effectively the entire internet, with vastly more sophistication.
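If you're curious what "predicting the next word" looks like mechanically, here is a toy sketch in Python. The probability table is invented for illustration; a real model computes a probability for every token in its vocabulary with a neural network:

```python
import random

# Toy probability table, invented for illustration. A real LLM computes
# a probability for every token in its vocabulary with a neural network.
next_word_probs = {"sunny": 0.42, "cold": 0.31, "unpredictable": 0.27}

def predict_next_word(probs):
    """Sample one word according to its probability."""
    words = list(probs)
    weights = list(probs.values())
    return random.choices(words, weights=weights, k=1)[0]

prompt = "The weather today is"
print(prompt, predict_next_word(next_word_probs))
```

Real models repeat this single step over and over, feeding each chosen word back into the input, which is how an answer builds up word by word.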

Figure: a three-step diagram. Step 1: a box labeled "Your input" containing "The weather today is…". Step 2: a box labeled "Model predicts the most likely next word", showing three options with probabilities: "sunny" 42%, "cold" 31%, "unpredictable" 27%. Step 3: a box labeled "Output builds up word by word".

What Are Tokens?

AI doesn't read words the way you do. It breaks text into small chunks called tokens — roughly corresponding to words or parts of words.

For example:

  • "running" might be one token;
  • "unbelievable" might be split into "un" + "believ" + "able";
  • Even spaces and punctuation are tokens.
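If you'd like to see tokenization in action, the sketch below uses OpenAI's open-source tiktoken library (one of several real tokenizers; exact splits vary from model to model, so your output may differ from the examples above):

```python
import tiktoken  # pip install tiktoken

# cl100k_base is the tokenizer used by several recent OpenAI models.
enc = tiktoken.get_encoding("cl100k_base")

for text in ["running", "unbelievable", "Hello, world!"]:
    token_ids = enc.encode(text)
    pieces = [enc.decode([tid]) for tid in token_ids]
    print(f"{text!r} -> {len(token_ids)} token(s): {pieces}")
```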

This is why AI sometimes handles unusual words awkwardly, or why very long inputs slow things down — every token takes processing power.

For practical use, the main thing to know is this: the more tokens in your conversation, the more context the model has — and the more it costs to run (which is why free plans have limits).
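Because context and cost both scale with tokens, it can be handy to count them yourself. A minimal sketch, again assuming the tiktoken library (the conversation is hypothetical, and real chat APIs add a few formatting tokens per message, so treat this as an estimate):

```python
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

# A hypothetical conversation. Every message stays in the model's context,
# so the token count (and therefore the cost) grows with each turn.
conversation = [
    "User: Summarize this report for me.",
    "Assistant: Sure! The report covers three main findings...",
    "User: Now turn the summary into a short email.",
]

total_tokens = sum(len(enc.encode(message)) for message in conversation)
print(f"Tokens in the conversation so far: {total_tokens}")
```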

Why AI Sometimes Makes Things Up

The model predicts what sounds right; it doesn't always produce what is factually correct. When it encounters a topic outside its training data, or a question it can't answer confidently, it doesn't say "I don't know" — it generates a plausible-sounding response anyway.

This is called a hallucination.

It's not a bug, and it's not the AI lying to you. It's a fundamental property of how prediction works. Knowing this is the first step to using AI safely. We'll cover it in depth in Section 3.
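A toy illustration of why this happens: the predictor below simply has no "I don't know" option. All names and data here are invented; it always returns whichever continuation fits the learned pattern best, even for a place it has never seen:

```python
import random

# Invented toy data. Imagine a "model" that learned the sentence pattern
# "The capital of X is <city>" from its training text.
known_facts = {"France": "Paris", "Japan": "Tokyo"}
city_sounding_words = ["Valdoria", "New Brenton"]  # fluent but fictional

def complete(country):
    if country in known_facts:
        return known_facts[country]
    # There is no "I don't know" branch: the pattern calls for a city
    # name, so the predictor supplies something that merely sounds right.
    return random.choice(city_sounding_words)

print(f"The capital of France is {complete('France')}.")   # correct
print(f"The capital of Zorbia is {complete('Zorbia')}.")   # hallucination
```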

AI predicts — it doesn't truly know. This one insight explains why good prompts matter, why you should verify important facts, and why human judgment is never optional when working with AI.

1. What is the core idea behind how large language models like ChatGPT work?

2. Why does AI sometimes generate responses that are not factually correct?
