Diagnosing A Bad Prompt
Even with a solid understanding of techniques and components, you will write prompts that don't produce what you need. This is normal. The skill is not avoiding bad outputs — it's knowing how to read them, understand what went wrong, and fix the problem efficiently.
This chapter gives you a systematic way to do that.
The Five Most Common Prompt Failures
Failure 1 — The task is unclear
The model interprets your request differently than you intended because the instruction was ambiguous.
Signs: the output addresses a different version of your question, or the model asks for clarification.
Fix: restate the task using a specific action verb. Replace "help me with" with "write," "summarize," "list," or "compare."
Failure 2 — Missing context
The model doesn't have the information it needs to tailor the output to your situation. It produces something generic because it has no choice.
Signs: the output is technically correct but feels like it could have been written for anyone, about anything.
Fix: add context — who you are, who the output is for, what situation you're dealing with, and what has already happened.
Failure 3 — No format specified
The model chooses a structure that doesn't match how you'll use the output.
Signs: you needed bullet points and got paragraphs; you needed a table and got a list; the response is five times longer than you needed.
Fix: specify exactly what format you want — and if length matters, give a number.
Failure 4 — Too many tasks in one prompt
You asked the model to do several different things at once and it did all of them poorly.
Signs: the output covers everything you asked for but none of it is good enough to use directly.
Fix: break the prompt into separate, focused requests. Do one task per prompt, then build on the output.
Failure 5 — Wrong direction from the start
The fundamental approach the model took was off — not the execution, but the direction itself.
Signs: the output is well-written and well-structured, but it's solving the wrong problem or taking the wrong angle.
Fix: don't iterate on a flawed foundation. Start a new prompt with the lessons from the failed attempt explicitly built in as constraints.
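The first three fixes above can be made concrete with before/after prompt pairs. The prompts below are invented for illustration only; substitute your own task, audience, and format requirements.

```python
# Illustrative before/after prompts for failures 1-3.
# All prompt text here is made up for demonstration purposes.

# Failure 1 — unclear task: replace "help me with" with a specific action verb.
vague_task = "Help me with my presentation."
clear_task = "Write a five-bullet outline for a ten-minute sales presentation."

# Failure 2 — missing context: say who you are, who it's for, what happened.
no_context = "Write an apology email."
with_context = (
    "Write an apology email from a support manager to a customer "
    "whose order arrived two weeks late. We have already refunded shipping."
)

# Failure 3 — no format specified: name the structure and give a number for length.
no_format = "Summarize this report."
with_format = (
    "Summarize this report as a table with columns Finding, Impact, Action. "
    "Maximum five rows."
)

for before, after in [(vague_task, clear_task),
                      (no_context, with_context),
                      (no_format, with_format)]:
    print(f"BEFORE: {before}\nAFTER:  {after}\n")
```

Each rewrite changes only the element the corresponding failure type names, which is exactly the discipline the practice exercise at the end of this chapter asks for.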
A Practical Diagnostic Sequence
When a response misses the mark, run through these questions before writing your follow-up:
- Was the task explicit? Could the model have interpreted it in a different but equally valid way?
- Did you provide enough context? What did you assume the model knew that it couldn't have known?
- Did you specify format and length? If not — did the model's choice work for your use case?
- Were you asking for one thing or several? If several — which one matters most right now?
- Is the direction fundamentally wrong? If yes — stop iterating and start over.
You don't need to answer all five every time. Most failed prompts have one primary cause — identify it, fix it, and move on.
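The sequence above can be sketched as a simple checklist that stops at the first failing check, reflecting the point that most failed prompts have one primary cause. The function and question wording below are an illustrative sketch, not part of any tool.

```python
# A minimal sketch of the diagnostic sequence as a first-failure checklist.
# The questions mirror the five checks above; you supply the yes/no answers.

DIAGNOSTIC_QUESTIONS = [
    "Was the task explicit?",
    "Did you provide enough context?",
    "Did you specify format and length?",
    "Were you asking for one thing, not several?",
    "Is the direction fundamentally right?",
]

def diagnose(answers):
    """Return the first check that failed, or None if the prompt passes all five."""
    for question, ok in zip(DIAGNOSTIC_QUESTIONS, answers):
        if not ok:
            return question
    return None

# Example: the prompt was explicit but gave no context.
print(diagnose([True, False, True, True, True]))
# prints: Did you provide enough context?
```

Returning only the first failure is a deliberate choice: fix one primary cause, re-run the prompt, and only then look at the remaining checks.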
Practice: Identify The Failure Type
Take a prompt you've used recently that produced a disappointing result — or write one now and deliberately make it vague.
Read the output and use the five failure types above to identify what went wrong. Then rewrite the prompt to address only that specific failure — no other changes.
Compare the two outputs. In most cases, a single targeted fix is enough to move from unusable to useful.
In Section 3, you'll apply everything from these first two sections to the specific tasks you face most often at work — starting with writing.
1. Which of the following is NOT one of the five most common prompt failures described in the chapter?
2. What is the recommended fix when a prompt's output is generic and could apply to anyone?
3. According to the chapter, what should you do if the direction of the prompt is fundamentally wrong?