Prompt Engineering Basics

How Large Language Models Understand Prompts

Large language models (LLMs) process prompts by breaking down the input text into smaller units called tokens. The model uses these tokens to understand the meaning and context of your instructions, then generates a response based on patterns it has learned from vast amounts of data.

Note
Definition

A token is a piece of text, such as a word or part of a word, that the model processes as a single unit.
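To make tokenization concrete, here is a minimal sketch of a toy tokenizer using only Python's standard library. Real LLM tokenizers (for example, byte-pair encoding) learn subword units from data, so this word-and-punctuation split is an illustration of the idea, not how production models tokenize.

```python
import re

def toy_tokenize(text):
    # Illustrative only: split into words and individual punctuation
    # marks. Real tokenizers often break rare words into subword pieces.
    return re.findall(r"\w+|[^\w\s]", text)

print(toy_tokenize("Write a poem about the ocean."))
# ['Write', 'a', 'poem', 'about', 'the', 'ocean', '.']
```

Notice that the final period becomes its own token; in real tokenizers, even spaces and word fragments can be separate tokens.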

LLMs do not "think" like humans. They predict the next word or phrase based on the input prompt and their training data.

If your prompt is too long, the model may ignore earlier parts of the input. This limit on the input size is called the context window.

Note
Definition

Context Window is the maximum number of tokens an LLM can consider at one time when generating a response.
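The effect of a context window can be sketched with a hypothetical truncation helper: when a prompt exceeds the limit, the oldest tokens are the ones that get dropped. This mirrors how many chat systems trim long conversations, though real systems measure the limit in model tokens, not words.

```python
def fit_to_context_window(tokens, max_tokens):
    # Hypothetical helper: keep only the most recent max_tokens tokens,
    # discarding the earliest ones when the prompt is too long.
    if len(tokens) <= max_tokens:
        return tokens
    return tokens[-max_tokens:]

prompt = ["Write", "a", "four", "line", "poem", "about", "the", "ocean"]
print(fit_to_context_window(prompt, 5))
# ['line', 'poem', 'about', 'the', 'ocean']
```

Here the instruction "Write a four" is lost, which is exactly why important instructions in very long prompts can be ignored.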

Example

If you ask, "Write a poem about the ocean," the model breaks the request into tokens and uses the context to generate a relevant poem. If you add more detail, such as "Write a four-line poem about the ocean using vivid imagery," the model uses the extra context to tailor its response.

Note
Quick Reminder

Being aware of the context window helps you avoid losing important information in long prompts.


What is a token in the context of LLMs?


Section 1. Chapter 2
