Data Privacy: What You Should Never Share with AI
Prompt engineering makes AI more useful. But knowing what not to put into a prompt is just as important as knowing how to write a good one.
When you type something into an AI tool, you are sending that text to an external server operated by a third-party company. Understanding what that means — and what risks it creates — is essential for anyone using AI at work.
How AI Tools Handle Your Input
Most consumer-facing AI tools — including free tiers of ChatGPT, Claude, and Gemini — may use your conversations to improve their models, unless you explicitly opt out.
This means that text you enter could, in principle, be reviewed by humans at the company, used in training data, or stored for extended periods.
Enterprise versions of these tools (such as Microsoft Copilot for Microsoft 365, or Claude for Enterprise) typically offer stronger data privacy guarantees — including commitments not to use your data for training. But these require a paid organizational subscription and specific configuration.
If you are unsure which version your organization uses — ask your IT or security team before putting sensitive information into any AI tool.
What You Should Never Enter into a Consumer AI Tool
As a general rule, treat AI chat interfaces the same way you would treat a public forum — assume the content could be seen by others.
Never enter:
- Personal data of clients, customers, or employees — names, emails, phone numbers, ID numbers, addresses;
- Financial data — account numbers, transaction details, salary information, budget figures that are not public;
- Health or medical information about any individual;
- Passwords, API keys, or authentication credentials of any kind;
- Confidential business information — unreleased product details, M&A discussions, strategic plans, proprietary research;
- Legal documents containing privileged or confidential content.
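Several of the categories above follow recognizable patterns, so a lightweight local check can flag them before anything leaves your machine. The sketch below uses simple regular expressions for emails, API-key-like tokens, phone numbers, and long digit runs; the pattern names and rules are illustrative assumptions, not a complete or reliable scanner.

```python
import re

# Illustrative patterns only -- real sensitive-data detection needs far
# more coverage (names, addresses, ID formats, etc.).
SENSITIVE_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "phone": re.compile(r"\+?\b\d(?:[\d\s\-()]{6,})\d\b"),
    "api_key": re.compile(r"\b(?:sk|pk|api|key)[-_][A-Za-z0-9]{16,}\b", re.IGNORECASE),
}

def find_sensitive(text: str) -> dict:
    """Return a mapping of pattern name -> matches found in the text."""
    hits = {}
    for name, pattern in SENSITIVE_PATTERNS.items():
        matches = pattern.findall(text)
        if matches:
            hits[name] = matches
    return hits
```

A check like this is a safety net, not a guarantee: it can remind you that a draft still contains an email address or a key-shaped string, but it cannot recognize confidential strategy discussions or privileged legal content. Judgment still comes first.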
Safe Alternatives: How to Get Help Without the Risk
Protecting sensitive data doesn't mean you can't use AI for tasks that involve it. Here are practical workarounds:
- Anonymize before prompting — replace real names with placeholders ("Client A," "Employee X") before pasting content into the AI;
- Describe the situation without the data — instead of pasting a confidential document, describe the type of problem and ask for a framework or approach;
- Use enterprise-grade tools — if your organization has licensed Microsoft 365 Copilot or similar, those tools typically operate under stricter data protection terms;
- Keep sensitive content local — use AI to draft structure and language, then fill in the sensitive specifics yourself offline.
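The first workaround can be automated with a small local script: swap real names for placeholders before prompting, then swap them back into the AI's answer offline. The function names and the "Person A" placeholder scheme below are a hypothetical sketch under the assumption that you know which names appear in the text, not a standard tool.

```python
def anonymize(text: str, names: list[str]) -> tuple[str, dict]:
    """Replace each real name with a placeholder; return the redacted
    text plus a mapping for restoring the originals later."""
    mapping = {}
    for i, name in enumerate(names, start=1):
        placeholder = f"Person {chr(64 + i)}"  # Person A, Person B, ...
        mapping[placeholder] = name
        text = text.replace(name, placeholder)
    return text, mapping

def restore(text: str, mapping: dict) -> str:
    """Swap placeholders back into the AI's response -- locally, so the
    real names never leave your machine."""
    for placeholder, name in mapping.items():
        text = text.replace(placeholder, name)
    return text
```

Because the mapping never leaves your machine, the AI only ever sees "Person A asked Person B about the raise," while you still get a response you can restore to the real names in one step.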
Know Your Organization's Policy
Many organizations are developing or have already published internal AI usage policies. These typically specify:
- Which AI tools are approved for use at work;
- What categories of data can and cannot be entered;
- Whether a company-specific AI environment is available;
- How to report concerns or incidents related to AI use.
If your organization has such a policy — follow it. If it doesn't — apply the conservative defaults above until guidance is provided.