When AI Gives a Bad Answer: Diagnosing Your Prompt
Even with everything you've learned, you'll sometimes get a response that misses the mark. That's normal. The difference between a frustrated user and an effective one is knowing how to diagnose what went wrong — and how to fix it without starting over.
The Most Common Prompting Mistakes
1. Too vague
The prompt doesn't give the AI enough to work with. The output is generic because the input was generic.
Fix: add context, audience, format, or constraints.
2. Too many tasks at once
Asking the AI to do five different things in one prompt often results in it doing all five poorly.
Fix: break it into separate prompts, or prioritize the most important task.
3. Assumed context
You know what you mean — but the AI doesn't. You referenced "the project" or "our client" without explaining who or what that is.
Fix: treat each prompt as if the AI knows nothing about your situation.
4. Wrong format specified (or none)
The AI gives you three paragraphs when you needed a list. Or writes 500 words when you needed 50.
Fix: always specify format and length, especially for content you'll use directly.
5. Anchored on a bad first draft
You keep iterating on a response that was fundamentally wrong from the start, rather than scrapping it.
Fix: if the direction is wrong, don't polish — restart with a better prompt.
How to Diagnose a Bad Output
When the AI's response isn't what you wanted, ask yourself:
- Was my task clear? Could the AI have interpreted it differently?
- Did I give enough context? What did I assume the AI knew that it couldn't have known?
- Did I specify format? Did I get paragraphs when I wanted a list?
- Was the direction fundamentally wrong? If so — don't iterate, restart.
Most of the time, a bad output is fixable with one targeted follow-up.
Try: "That's not quite what I needed. What I'm actually looking for is [clarification]. Can you try again?"
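If you like, you can even think of the diagnostic checklist as a rough heuristic. The sketch below is a toy illustration, not a real tool: the `diagnose_prompt` function, its word-count threshold, and its keyword lists are all invented for this example.

```python
def diagnose_prompt(prompt: str) -> list[str]:
    """Return a list of likely problems with a prompt, using simple,
    illustrative heuristics (these thresholds are assumptions)."""
    issues = []
    lower = prompt.lower()

    # Too vague: very short prompts rarely carry context or constraints.
    if len(prompt.split()) < 8:
        issues.append("too vague: add context, audience, or constraints")

    # No format specified: look for words that hint at shape or length.
    format_hints = ("list", "table", "bullet", "paragraph", "words")
    if not any(hint in lower for hint in format_hints):
        issues.append("no format specified: say how long and in what shape")

    # Assumed context: vague references the AI can't resolve.
    if " it " in f" {lower} " or "the project" in lower:
        issues.append("assumed context: spell out what 'it' or 'the project' is")

    return issues

print(diagnose_prompt("Summarize the project."))
```

Running this on a vague prompt like "Summarize the project." flags all three problems, while a specific prompt such as "Write a 5-bullet list summarizing the attached Q3 sales report for our CFO." passes cleanly.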
A Note on AI Confidence
AI always sounds confident. It doesn't hedge the way a human expert would — it delivers its output in the same consistent, assured tone regardless of whether the content is accurate.
This means: never mistake fluency for accuracy. A response can be beautifully written and completely wrong. The quality of the language is not a signal of the quality of the information.
This is especially important for facts, statistics, and specific claims — always verify these from a reliable source. Section 3 covers this in depth.
You now have everything you need to write effective prompts, iterate with confidence, and diagnose what went wrong when a response misses the mark.
In Section 3, we shift from getting good results to staying safe — understanding hallucinations, protecting sensitive data, and being a responsible AI user at work.
1. Which of the following is NOT listed as a common prompting mistake in this chapter?
2. Which of the following are recommended questions to ask yourself when diagnosing a bad AI output to improve your prompt?
3. Why is AI confidence not a reliable indicator of accuracy?