Understanding AI for Work

Data Privacy: What You Should Never Share with AI


Prompt engineering makes AI more useful. But knowing what not to put into a prompt is just as important as knowing how to write a good one.

When you type something into an AI tool, you are sending that text to an external server operated by a third-party company. Understanding what that means — and what risks it creates — is essential for anyone using AI at work.

How AI Tools Handle Your Input

Most consumer-facing AI tools — including free tiers of ChatGPT, Claude, and Gemini — may use your conversations to improve their models, unless you explicitly opt out.

This means that text you enter could, in principle, be reviewed by humans at the company, used in training data, or stored for extended periods.

Enterprise versions of these tools (such as Microsoft Copilot for Microsoft 365, or Claude for Enterprise) typically offer stronger data privacy guarantees — including commitments not to use your data for training. But these require a paid organizational subscription and specific configuration.

If you are unsure which version your organization uses — ask your IT or security team before putting sensitive information into any AI tool.

What You Should Never Enter into a Consumer AI Tool

As a general rule, treat AI chat interfaces the same way you would treat a public forum — assume the content could be seen by others.

Never enter:

  • Personal data of clients, customers, or employees — names, emails, phone numbers, ID numbers, addresses;
  • Financial data — account numbers, transaction details, salary information, budget figures that are not public;
  • Health or medical information about any individual;
  • Passwords, API keys, or authentication credentials of any kind;
  • Confidential business information — unreleased product details, M&A discussions, strategic plans, proprietary research;
  • Legal documents containing privileged or confidential content.

Screenshot description: A two-panel visual. Left panel, labeled "What you typed": a chat prompt box containing a realistic but fictional example of a bad practice — "Here are the Q3 sales figures for our top 10 clients: [a short table with fictional company names and revenue numbers]. Summarize the key trends." Right panel, labeled "Where it goes": a simple diagram showing the text leaving the user's browser and traveling to a cloud server icon labeled "AI provider's servers," with a dotted line branching off labeled "May be used for model training or reviewed by staff." A red warning label across the bottom reads: "Confidential business data should never be entered into a consumer AI tool." All company names and numbers in the example are clearly fictional.
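As a rough last-line check, you can scan a draft prompt for obviously sensitive patterns before sending it. The patterns and function below are a minimal illustrative sketch, not a reliable filter — real data-loss-prevention tools cover far more cases:

```python
import re

# Illustrative patterns only (assumptions for this sketch, not a complete policy).
SENSITIVE_PATTERNS = {
    "email address": r"[\w.+-]+@[\w-]+\.[\w.-]+",
    "card-like number": r"\b(?:\d[ -]?){13,16}\b",
    "API-key-like token": r"\b(?:sk|pk|ghp)_[A-Za-z0-9]{10,}\b",
}

def flag_sensitive(prompt: str) -> list[str]:
    """Return labels for any sensitive-looking content in a draft prompt."""
    return [label for label, pattern in SENSITIVE_PATTERNS.items()
            if re.search(pattern, prompt)]

# A prompt with an embedded credential should be flagged before sending.
flags = flag_sensitive("Debug this: client = API(key='sk_live1234567890')")
```

A non-empty result means the prompt needs editing before it goes anywhere near a consumer AI tool; an empty result does not mean the prompt is safe.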

Safe Alternatives: How to Get the Help Without the Risk

Protecting sensitive data doesn't mean you have to stop using AI for these tasks. Here are practical workarounds:

  • Anonymize before prompting — replace real names with placeholders ("Client A," "Employee X") before pasting content into the AI;
  • Describe the situation without the data — instead of pasting a confidential document, describe the type of problem and ask for a framework or approach;
  • Use enterprise-grade tools — if your organization has licensed Microsoft 365 Copilot or similar, those tools typically operate under stricter data protection terms;
  • Keep sensitive content local — use AI to draft structure and language, then fill in the sensitive specifics yourself offline.

Know Your Organization's Policy

Many organizations are developing or have already published internal AI usage policies. These typically specify:

  • Which AI tools are approved for use at work;
  • What categories of data can and cannot be entered;
  • Whether a company-specific AI environment is available;
  • How to report concerns or incidents related to AI use.

If your organization has such a policy — follow it. If it doesn't — apply the conservative defaults above until guidance is provided.


Section 3. Chapter 3
