- Any text you type into an AI chat — including passwords, API keys, or credentials — is transmitted to and processed by the AI provider's servers
- On consumer platforms (ChatGPT free, Gemini personal), pasted credentials are stored in conversation history and potentially used for training
- There is no technical filter that blocks or masks sensitive data before it reaches the AI provider
- Even "deleted" conversations may be retained for up to 30 days for safety review
- Organizations should establish clear policies about what data may never be submitted to any AI tool
When you type or paste text into an AI chat interface, that text is sent as an API request to the AI provider's servers for processing. The provider receives the complete, unmasked text — including any passwords, API keys, credentials, or sensitive identifiers you include. From that point, what happens to that data depends entirely on the platform's privacy policy and your account type.
## What happens when you type into an AI chat
The technical journey of a message is straightforward: your keystrokes form text in your browser, that text is transmitted over HTTPS to the AI provider’s API, processed by the model, and a response is returned. At every point in that chain, the provider’s servers handle your full, unmasked input.
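To make the transmission concrete, the sketch below shows roughly what a chat message looks like as an HTTP request. The endpoint URL, model name, and payload shape are illustrative, modeled on common chat-completion APIs rather than any specific provider's.

```python
# Illustrative sketch: what a chat message looks like on the wire.
# The endpoint and payload shape are modeled on common chat-completion
# APIs; details vary by provider.
import requests

user_message = "Why does my login fail? Config: db_password=hunter2"  # full plaintext

response = requests.post(
    "https://api.example-ai-provider.com/v1/chat/completions",  # hypothetical endpoint
    headers={"Authorization": "Bearer YOUR_ACCOUNT_KEY"},
    json={
        "model": "example-model",
        # The provider receives the complete, unmasked text -- including
        # the embedded password. Nothing in this chain filters it out.
        "messages": [{"role": "user", "content": user_message}],
    },
    timeout=30,
)
print(response.json())
```

HTTPS encrypts the message in transit, but only against eavesdroppers between you and the provider. The provider itself decrypts and processes the full plaintext.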
This is not unique to AI tools — it’s how all web services work. When you type a search query, send a message, or fill in a form online, that data travels to a server. The difference with AI is what happens after: conversation storage, potential training data use, and the nature of what people tend to share with an AI compared to a search box.
People share much more sensitive content with AI tools than with traditional software. They ask for help debugging code with real credentials embedded, they paste contract text to summarize, they describe employee situations in detail. The conversational format creates a false sense of privacy — it feels like a private conversation, but it is a transmission to a third-party service.
## Types of sensitive data people accidentally paste
Several categories of data appear regularly in AI conversations, often without the user consciously registering the risk (a detection sketch follows the list):
- Passwords and login credentials — often pasted to ask for help with authentication issues or config files
- API keys and access tokens — AWS, GitHub, Stripe, and similar services; commonly pasted when asking for help with code
- OAuth tokens and session cookies — appear in debugging contexts when users share browser dev tool output
- Database connection strings — frequently include usernames, passwords, and hostnames for production systems
- Personal data about employees or customers — names, email addresses, ID numbers, HR matters
- Financial data — account numbers, payment card details, invoices, P&L spreadsheet contents
- Health information — HIPAA-protected in the US and classified as special category data under GDPR
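Many of these credential formats are mechanically recognizable, which is why secret scanners exist. Below is a minimal pre-paste scanner sketch; the regex patterns cover a few well-known formats and are illustrative only — real secret scanners use far larger rule sets.

```python
# A minimal pre-paste scanner sketch. Patterns are illustrative,
# not exhaustive: real tools (gitleaks, trufflehog, etc.) ship
# hundreds of rules.
import re

SECRET_PATTERNS = {
    "AWS access key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "GitHub token": re.compile(r"\bghp_[A-Za-z0-9]{36}\b"),
    "Stripe live key": re.compile(r"\bsk_live_[A-Za-z0-9]{24,}\b"),
    # user:password@host inside a connection-string URI
    "Connection string": re.compile(r"\b\w+://[^:\s]+:[^@\s]+@[\w.-]+"),
}

def find_secrets(text: str) -> list[str]:
    """Return the names of any credential patterns found in text."""
    return [name for name, pattern in SECRET_PATTERNS.items() if pattern.search(text)]

snippet = "conn = psycopg2.connect('postgresql://admin:S3cret@db.prod.internal/app')"
hits = find_secrets(snippet)
if hits:
    print(f"Blocked paste: looks like it contains {', '.join(hits)}")
```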
API key exposure is particularly serious: a leaked AWS key or GitHub token can grant an attacker persistent access to your infrastructure. Credentials shared with an AI provider are stored on their servers, potentially visible to provider staff, and (on consumer tiers) may exist in training datasets. If you’ve pasted an API key into ChatGPT or any other AI chat, treat it as compromised and rotate it immediately.
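As one example of what rotation can look like, here is a hedged sketch using boto3 for an AWS IAM access key. The user name and key ID are hypothetical placeholders; the same create-replacement, disable, delete pattern applies to most credential systems.

```python
# Hedged sketch: rotating an exposed AWS access key with boto3.
# "deploy-bot" and the key ID are hypothetical placeholders.
# Roll the new key out to dependent services before disabling the old one.
import boto3

iam = boto3.client("iam")
USER = "deploy-bot"        # hypothetical IAM user whose key leaked
LEAKED_KEY_ID = "AKIA..."  # the exposed access key ID

# 1. Create a replacement key (AWS allows at most two keys per user).
new_key = iam.create_access_key(UserName=USER)["AccessKey"]
print("New key ID:", new_key["AccessKeyId"])  # store the secret in a vault, not a chat

# 2. Disable the leaked key so it can no longer authenticate.
iam.update_access_key(UserName=USER, AccessKeyId=LEAKED_KEY_ID, Status="Inactive")

# 3. Delete it once nothing depends on it anymore.
iam.delete_access_key(UserName=USER, AccessKeyId=LEAKED_KEY_ID)
```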
## Where the data goes after you submit it
What happens to your input after submission varies significantly by platform and account type. The key variables are whether the conversation is stored, whether it may be used to train the model, and whether provider staff can access it.
| Platform | Stored? | Used for training? | Staff access? | Zero-retention option? |
|---|---|---|---|---|
| ChatGPT Free | Yes | Yes, by default | Yes (limited) | No |
| ChatGPT Enterprise | Yes | No | Yes (limited) | Yes |
| OpenAI API (ZDR) | No | No | No | Yes (by request) |
| Google Gemini (free) | Yes | Yes, by default | Yes (limited) | No |
| Microsoft 365 Copilot | Yes | No | Yes (limited) | No |
The consumer tiers of ChatGPT and Gemini represent the highest risk: data is stored, may be used for training, and providers reserve the right for staff to review conversations. The enterprise and API tiers offer meaningfully stronger protections — but most employees using AI on their own initiative are on consumer accounts, not enterprise ones.
## Can AI company employees read your conversations?
Yes, in practice. All major AI providers reserve the right to review conversations for safety, abuse prevention, and quality assurance. The extent varies by tier and provider, but no major consumer AI platform offers a guarantee that human staff will never read your conversations.
OpenAI states that “a limited number of authorized employees” may access conversations. This access is described as necessary for safety review, policy enforcement, and improving the service. On free-tier accounts, conversations may also be reviewed as part of model training quality assurance processes.
This means passwords pasted on free-tier accounts are potentially accessible not just to OpenAI’s model, but to OpenAI personnel. The same applies to any other sensitive content — source code, client data, internal documents — submitted through consumer interfaces.
Enterprise tiers typically limit human access further and provide contractual commitments through Data Processing Agreements. But the baseline consumer product does not offer these guarantees.
## What you should never share with AI tools
The following categories should be treated as off-limits for any AI tool unless your organization has specifically approved a platform with appropriate enterprise-tier protections and a signed DPA:
- Passwords and login credentials of any kind
- API keys, access tokens, and secret keys
- Production database connection strings
- Personally identifiable information about other people — names combined with identifiers like email addresses, ID numbers, or dates of birth
- Health or medical information about individuals
- Financial account numbers or payment card details
- NDA-protected content or confidential commercial information
- Source code from proprietary systems (on consumer tiers)
What you can safely share with AI tools includes synthetic or invented data, public documentation and open-source code, anonymized examples where real identifiers have been replaced with placeholders, and your own non-confidential writing and ideas. The key principle: if you would hesitate to post it publicly on the internet, hesitate equally before pasting it into an AI chat.
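For the anonymization case, a small scrubbing pass can swap obvious identifiers for placeholders before anything leaves your machine. The sketch below handles only email addresses and phone numbers and is illustrative only; real PII scrubbing needs much broader coverage (names, ID numbers, addresses).

```python
# Minimal anonymization sketch: replace obvious identifiers with
# placeholders before sharing an example with an AI tool.
# Patterns are illustrative, not comprehensive PII detection.
import re

EMAIL = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b")
PHONE = re.compile(r"\+?\b\d[\d ()-]{7,}\d\b")

def anonymize(text: str) -> str:
    """Replace emails and phone numbers with neutral placeholders."""
    text = EMAIL.sub("<EMAIL>", text)
    text = PHONE.sub("<PHONE>", text)
    return text

real = "Ticket from jane.doe@acme.com, callback +1 (555) 010-2837."
print(anonymize(real))
# -> Ticket from <EMAIL>, callback <PHONE>.
```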
## What organizations should do
Individual caution is not enough. Employees make mistakes under time pressure, and the consequences of a pasted API key or client dataset reaching a consumer AI platform can be significant. Organizations need structural responses, not just awareness training.
The concrete steps that make a real difference:
- Define a data classification policy. Identify what data is Confidential or Restricted and explicitly prohibit it from all AI tools. Example: “No data labeled Confidential may be submitted to any external AI platform, regardless of tier.” Make the classification actionable — employees need to know which documents and data types fall in each category.
- Approve enterprise tiers with signed DPAs. If AI tools are valuable to your team — and they are — provision them properly. A ChatGPT Enterprise or Microsoft 365 Copilot account with a signed DPA is not the same risk profile as employees using consumer accounts. Upgrade and control the channel.
- Train employees on what not to paste. Most employees who paste credentials into ChatGPT don’t register it as a risk. A short, specific training that names the categories — API keys, database strings, passwords — and explains why they’re dangerous is more effective than general AI privacy awareness.
- Monitor AI tool usage across your organization. You cannot enforce policies for tools you don’t know your team is using. VetoShield tracks which AI platforms employees access, detects shadow AI adoption, and gives you the visibility to apply policies consistently — without reading anyone’s prompts.