How Language Models "Think"
You don't need to understand how a car engine works to drive — but knowing that it runs on fuel helps you avoid running out of gas. The same logic applies to AI. You don't need a computer science degree, but understanding one core idea will make everything else in this course click.
Prediction: The Core Idea
Large language models (LLMs) — the technology behind ChatGPT, Claude, Gemini, and others — work by predicting what comes next.
Given a sequence of words, the model calculates which word (or phrase) is most likely to follow, based on patterns it learned from enormous amounts of text: books, articles, websites, code, and more.
It's similar to the autocomplete on your phone — except trained on effectively the entire internet, with vastly more sophistication.
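To make "predicting what comes next" concrete, here is a deliberately tiny sketch: a bigram model that counts which word follows which in a toy corpus, then predicts the most frequent follower. The corpus and function names are invented for illustration; real LLMs use neural networks over billions of examples, but the core idea of "pick the likely continuation" is the same.

```python
from collections import Counter, defaultdict

# Toy stand-in for training text (hypothetical example).
corpus = "the cat sat on the mat . the cat ate the fish .".split()

# Count how often each word follows each other word.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word):
    """Return the word that most often followed `word` in the corpus."""
    return following[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" -- it follows "the" more often than "mat" or "fish"
```

Note that the model never "understands" cats or mats; it only tracks which continuations were common in its training text.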
What Are Tokens?
AI doesn't read words the way you do. It breaks text into small chunks called tokens — roughly corresponding to words or parts of words.
For example:
- "running" might be one token;
- "unbelievable" might be split into "un" + "believ" + "able";
- Even spaces and punctuation are tokens.
This is why AI sometimes handles unusual words awkwardly, or why very long inputs slow things down — every token takes processing power.
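The splitting described above can be sketched with a toy tokenizer. The tiny vocabulary here is invented for illustration; real tokenizers (such as byte-pair encoding) learn their vocabulary of common chunks from data, which is why frequent words stay whole while rare ones get split.

```python
# Hypothetical mini-vocabulary of "known" chunks.
VOCAB = {"un", "believ", "able", "running", " ", "."}

def tokenize(text):
    """Greedily match the longest known chunk at each position."""
    tokens, i = [], 0
    while i < len(text):
        for length in range(len(text) - i, 0, -1):
            piece = text[i:i + length]
            if piece in VOCAB:
                tokens.append(piece)
                i += length
                break
        else:
            tokens.append(text[i])  # unknown character becomes its own token
            i += 1
    return tokens

print(tokenize("unbelievable"))  # ['un', 'believ', 'able']
```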
For practical use, the main thing to know is this: the more tokens in your conversation, the more context the model has — and the more it costs to run (which is why free plans have limits).
Why AI Sometimes Makes Things Up
The model predicts what sounds right; it doesn't always produce what is factually correct. When it encounters a topic outside its training data, or a question it can't answer confidently, it doesn't say "I don't know" — it generates a plausible-sounding response anyway.
This is called a hallucination.
It's not a bug, and it's not the AI lying to you. It's a fundamental property of how prediction works. Knowing this is the first step to using AI safely. We'll cover it in depth in Section 3.
AI predicts — it doesn't truly know. This one insight explains why good prompts matter, why you should verify important facts, and why human judgment is never optional when working with AI.
1. What is the core idea behind how large language models like ChatGPT work?
2. Why does AI sometimes generate responses that are not factually correct?