Prompt Engineering for Work
Core Prompting Techniques

Diagnosing A Bad Prompt


Even with a solid understanding of techniques and components, you will write prompts that don't produce what you need. This is normal. The skill is not avoiding bad outputs — it's knowing how to read them, understand what went wrong, and fix the problem efficiently.

This chapter gives you a systematic way to do that.

The Five Most Common Prompt Failures

Failure 1 — The task is unclear

The model interprets your request differently than you intended because the instruction was ambiguous.

Signs: the output addresses a different version of your question, or the model asks for clarification.

Fix: restate the task using a specific action verb. Replace "help me with" with "write," "summarize," "list," or "compare."
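To make the idea concrete, here is a minimal sketch of the before/after difference. The prompt texts, the verb list, and the `has_action_verb` helper are invented for this example; the check is deliberately crude and only illustrates why leading with a specific action verb removes ambiguity.

```python
# Hypothetical before/after pair illustrating an unclear task (Failure 1).
vague = "Help me with the quarterly report."                     # open to many readings
specific = "Summarize the quarterly report in 5 bullet points."  # one verb, one deliverable

# A small set of specific action verbs, as suggested in the fix above.
ACTION_VERBS = {"write", "summarize", "list", "compare"}

def has_action_verb(prompt):
    """Crude check: does the prompt open with a specific action verb?"""
    first_word = prompt.split()[0].lower().rstrip(",.")
    return first_word in ACTION_VERBS

print(has_action_verb(vague))     # False
print(has_action_verb(specific))  # True
```

A real diagnosis is of course a judgment call, not a string check; the point is simply that the rewritten prompt names one action and one deliverable.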

Failure 2 — Missing context

The model doesn't have the information it needs to tailor the output to your situation. It produces something generic because it has no choice.

Signs: the output is technically correct but feels like it could have been written for anyone, about anything.

Fix: add context — who you are, who the output is for, what situation you're dealing with, and what has already happened.

Failure 3 — No format specified

The model chooses a structure that doesn't match how you'll use the output.

Signs: you needed bullet points and got paragraphs; you needed a table and got a list; the response is five times longer than you needed.

Fix: specify exactly what format you want — and if length matters, give a number.

Failure 4 — Too many tasks in one prompt

You asked the model to do several different things at once, and it did all of them poorly.

Signs: the output covers everything you asked for but none of it is good enough to use directly.

Fix: break the prompt into separate, focused requests. Do one task per prompt, then build on the output.
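The "one task per prompt" fix can be sketched as a short sequence where each request does one thing and later prompts build on earlier outputs. Everything here is illustrative: `ask_model` is a hypothetical stand-in for whichever LLM interface you actually use, and the prompts are invented.

```python
# Sketch of "one task per prompt": focused requests chained in sequence.

def ask_model(prompt):
    # Placeholder: in real use this would call your LLM of choice.
    return f"<model output for: {prompt}>"

# Instead of one overloaded prompt...
overloaded = "Draft the email, suggest a subject line, and list follow-up tasks."

# ...run focused prompts, feeding each output into the next request.
draft = ask_model("Draft a short email declining the vendor meeting.")
subject = ask_model(f"Suggest a subject line for this email:\n{draft}")
tasks = ask_model(f"List follow-up tasks implied by this email:\n{draft}")
```

The chain also gives you a checkpoint after each step: if the draft is wrong, you fix it before asking for a subject line, instead of regenerating everything.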

Failure 5 — Wrong direction from the start

The fundamental approach the model took was off — not the execution, but the direction itself.

Signs: the output is well-written and well-structured, but it's solving the wrong problem or taking the wrong angle.

Fix: don't iterate on a flawed foundation. Start a new prompt with the lessons from the failed attempt explicitly built in as constraints.

Figure: a diagnostic card titled "Prompt Failure Diagnostic," pairing each failure with its one-line fix:

  • Output is off-topic or misinterpreted → Restate the task with a specific action verb.
  • Output is generic, could apply to anyone → Add context: who, for whom, what situation.
  • Wrong structure or length → Specify format and length explicitly.
  • Everything covered, nothing done well → One task per prompt.
  • Right execution, wrong direction → Start fresh; don't iterate on a bad foundation.

A Practical Diagnostic Sequence

When a response misses the mark, run through these questions before writing your follow-up:

  • Was the task explicit? Could the model have interpreted it in a different but equally valid way?
  • Did you provide enough context? What did you assume the model knew that it couldn't have known?
  • Did you specify format and length? If not — did the model's choice work for your use case?
  • Were you asking for one thing or several? If several — which one matters most right now?
  • Is the direction fundamentally wrong? If yes — stop iterating and start over.

You don't need to answer all five every time. Most failed prompts have one primary cause — identify it, fix it, and move on.
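The sequence above can be sketched as an ordered checklist. This is a toy illustration, not part of the chapter's material: you supply the yes/no answers after reading the output, and the function returns the first check that failed, matching the advice that most failed prompts have one primary cause.

```python
# The five diagnostic questions, in the order given above.
DIAGNOSTIC = [
    ("task unclear",        "Was the task explicit and unambiguous?"),
    ("missing context",     "Did you provide the context the model needed?"),
    ("no format specified", "Did you specify format and length?"),
    ("too many tasks",      "Were you asking for just one thing?"),
    ("wrong direction",     "Is the overall direction right?"),
]

def diagnose(answers):
    """Return the first failure whose check was answered 'no' (False).

    `answers` maps failure names to booleans; missing entries default
    to True, i.e. "this check passed."
    """
    for failure, _question in DIAGNOSTIC:
        if not answers.get(failure, True):
            return failure
    return None  # no primary failure identified

# Example: output was generic because context was missing.
print(diagnose({"task unclear": True, "missing context": False}))
# → missing context
```

Returning only the first failing check mirrors the advice in the text: identify the one primary cause, fix it, and move on.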

Practice: Identify The Failure Type

Take a prompt you've used recently that produced a disappointing result — or write one now and deliberately make it vague.

Read the output and use the five failure types above to identify what went wrong. Then rewrite the prompt to address only that specific failure — no other changes.

Compare the two outputs. In most cases, a single targeted fix is enough to move from unusable to useful.

In Section 3, you'll apply everything from these first two sections to the specific tasks you face most often at work — starting with writing.

1. Which of the following is NOT one of the five most common prompt failures described in the chapter?

2. What is the recommended fix when a prompt's output is generic and could apply to anyone?

3. According to the chapter, what should you do if the direction of the prompt is fundamentally wrong?



Section 2. Chapter 5
