Knowing The Real Limits
Good prompts can do a lot. They can't do everything. Part of using AI effectively is recognizing the situations where a better prompt is not the solution — where the limitation is structural, and no amount of technique will produce a reliable result.
Knowing these limits saves you time and prevents you from trusting outputs you shouldn't.
Limit 1 — Information That Doesn't Exist In The Training Data
AI models are trained on data up to a specific cutoff date. Anything that happened after that date is outside what the model knows — and prompting it more carefully won't change that.
This affects:
- Recent news, regulatory changes, or market developments;
- New product releases, updated pricing, or recent research;
- Events, decisions, or announcements that postdate the model's training.
What prompting can't fix: the missing information is simply not there. The model will often produce a plausible-sounding response anyway — which is exactly the danger.
What you can do instead: use a tool with web search enabled (ChatGPT with Browse, Perplexity, Gemini with Search) for time-sensitive information, or paste the relevant current information directly into your prompt so the model can work with it.
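The second workaround, pasting current information into the prompt, can be sketched as a small helper. This is an illustrative pattern, not part of any specific tool's API; the function name and template wording are assumptions:

```python
def build_prompt(question: str, current_info: str) -> str:
    """Combine a question with pasted-in current information so the model
    answers from the supplied facts rather than stale training data."""
    return (
        "Use ONLY the information below to answer. If the answer is not "
        "in it, say so instead of guessing.\n\n"
        f"--- Current information ---\n{current_info}\n\n"
        f"--- Question ---\n{question}"
    )

prompt = build_prompt(
    question="What is the current enterprise price?",
    current_info="As of this quarter, the enterprise plan is $499/month.",
)
```

The explicit instruction to use only the supplied information, and to admit when the answer isn't there, reduces the chance of the model filling the gap with a plausible-sounding guess.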
Limit 2 — Verified Facts, Citations, And Specific Data
AI generates plausible text. For well-covered topics, that text is often accurate. For specific facts — statistics, citations, legal references, study findings — it is frequently wrong in ways that are impossible to detect from the output alone.
No prompt technique reliably fixes this because the problem isn't in how the question is asked — it's in what the model can reliably produce.
What prompting can't fix: asking the model to "be accurate" or "only use verified sources" doesn't give it access to information it doesn't have. It may reduce hallucinations slightly but won't eliminate them.
What you can do instead: use AI to generate the structure and language of content that requires specific facts, then fill in the verified data yourself from primary sources. Treat any specific claim from AI as unverified until checked.
Limit 3 — Your Organization's Specific Context
The model has no knowledge of your company's internal situation — your strategy, your team dynamics, your client relationships, your product roadmap, your culture, or the history of any decision you're working on.
When you ask AI for advice, recommendations, or analysis about your specific situation, the output is based on general patterns — not your actual context. It will sound applicable but may be completely misaligned with your reality.
What prompting can fix — partially: you can provide context in your prompt, and the more specific that context is, the more tailored the output will be. But there are limits to how much context fits in a prompt, and the model can only work with what you explicitly give it.
What to watch for: AI recommendations that are technically sound but ignore the organizational, political, or relational constraints that make the "obvious" solution unworkable in your specific environment.
Limit 4 — Judgment, Values, And Decisions That Affect People
AI can help you think through a decision. It cannot make the decision for you — and it shouldn't.
Tasks where human judgment must remain primary:
- Hiring, performance, and compensation decisions;
- Strategic choices with significant organizational consequences;
- Communications that carry legal, ethical, or reputational weight;
- Any situation where the nuances of relationships, culture, or individual circumstances determine the right outcome.
What prompting can't fix: AI has no stake in the outcome, no knowledge of the people involved, and no accountability for the consequences. These are the exact things that make judgment possible.
What AI can do: surface options, structure your thinking, anticipate objections, and draft communications. The actual decision — and the responsibility for it — belongs to you.
A Practical Test Before You Trust An Output
Before acting on any AI output that involves facts, recommendations, or decisions, run through these four questions:
- Could this information be outdated? If yes — verify with a current source;
- Does this contain specific claims I haven't verified? If yes — check each one before using;
- Does this recommendation account for my actual context? If no — treat it as a starting point, not a conclusion;
- Am I using this output to make a decision that affects people? If yes — your judgment, not the AI's output, should be the deciding factor.
These questions take thirty seconds. They're the difference between using AI as a tool and being led by it.
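The four questions above can be expressed as a simple pre-flight checklist. This is an illustrative sketch, not a real tool; the function and action wording are assumptions:

```python
def trust_check(outdated_risk: bool, unverified_claims: bool,
                fits_my_context: bool, affects_people: bool) -> list:
    """Return the follow-up actions the four questions call for
    before acting on an AI output."""
    actions = []
    if outdated_risk:
        actions.append("verify with a current source")
    if unverified_claims:
        actions.append("check each specific claim")
    if not fits_my_context:
        actions.append("treat as a starting point, not a conclusion")
    if affects_people:
        actions.append("apply your own judgment before deciding")
    return actions

# Example: a draft containing unverified statistics that will inform
# a decision about people.
print(trust_check(outdated_risk=False, unverified_claims=True,
                  fits_my_context=True, affects_people=True))
# → ['check each specific claim', 'apply your own judgment before deciding']
```

An empty result means the output is safe to use as-is; anything returned is a step to take before relying on it.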