How Large Language Models Understand Prompts
Large language models (LLMs) process prompts by breaking down the input text into smaller units called tokens. The model uses these tokens to understand the meaning and context of your instructions, then generates a response based on patterns it has learned from vast amounts of data.
A token is a piece of text, such as a word or part of a word, that the model processes individually.
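To make the idea concrete, here is a minimal sketch of subword tokenization. The vocabulary and the greedy longest-match rule are hypothetical simplifications, not any real model's tokenizer; real LLMs use learned vocabularies with tens of thousands of entries.

```python
# Hypothetical toy vocabulary, for illustration only.
VOCAB = {"write", "a", "poem", "about", "the", "ocean", "un", "believ", "able"}

def tokenize(text):
    """Greedily split each word into the longest pieces found in VOCAB."""
    tokens = []
    for word in text.lower().split():
        while word:
            # Take the longest prefix of the word that is in the vocabulary,
            # falling back to a single character if nothing matches.
            for end in range(len(word), 0, -1):
                if word[:end] in VOCAB or end == 1:
                    tokens.append(word[:end])
                    word = word[end:]
                    break
    return tokens

print(tokenize("Write a poem about the ocean"))
# ['write', 'a', 'poem', 'about', 'the', 'ocean']
print(tokenize("unbelievable"))
# ['un', 'believ', 'able']
```

Common words map to single tokens, while a word outside the vocabulary ("unbelievable") is broken into smaller known pieces, which is how real tokenizers handle rare words.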
LLMs do not "think" like humans. They predict the next word or phrase based on the input prompt and their training data.
If your prompt is too long, the model may ignore earlier parts of the input. This limit on input size is called the context window.
A context window is the maximum number of tokens an LLM can consider at one time when generating a response.
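The effect of a context window can be sketched as simple truncation: tokens beyond the limit fall outside what the model can see. The window size and the drop-oldest-first behavior here are illustrative assumptions; real models have much larger windows and handle overflow in various ways.

```python
CONTEXT_WINDOW = 8  # toy limit; real models allow thousands of tokens

def fit_to_context(tokens, window=CONTEXT_WINDOW):
    """Keep only the most recent `window` tokens; earlier ones are dropped."""
    if len(tokens) <= window:
        return tokens
    return tokens[-window:]  # everything earlier falls outside the window

prompt = "please remember my name is Ada and write a short ocean poem".split()
print(fit_to_context(prompt))
# ['is', 'Ada', 'and', 'write', 'a', 'short', 'ocean', 'poem']
```

Note that the instruction "please remember my name" is exactly what gets lost, which is why important details can vanish from very long prompts.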
Example
If you ask, "Write a poem about the ocean," the model interprets each word as a token and uses the context to generate a relevant poem. If you add more details, such as "Write a four-line poem about the ocean using vivid imagery," the model uses the extra context to tailor its response.
Being aware of the context window helps you avoid losing important information in long prompts.