AI Privacy Basics

How to Opt Out of AI Training: Step-by-Step for Every Major Platform

Key Takeaways
  • Opting out of AI training stops your conversations from being used to improve future model versions
  • Each platform has a different location for this setting — none of them make it obvious
  • ChatGPT Free/Plus: Settings → Data Controls → "Improve the model for everyone" toggle
  • Google Gemini: myaccount.google.com → Data & Privacy → turn off Gemini Apps Activity
  • Opting out as an individual is not a substitute for organization-wide policy — employees who haven't opted out remain a risk

Definition

Opting out of AI training means telling the platform not to use your conversations as training data for future model improvements. It does not delete existing data — it only affects future conversations. Most platforms take up to 30 days to apply an opt-out after you change the setting.

What opting out actually does (and doesn’t do)

When you opt out of AI training, you are telling the platform to exclude your future conversations from the data used to train and improve its models. This is a meaningful protection — your prompts won’t be fed into the next training run, and they won’t be reviewed by human annotators evaluating model quality.

What opting out does not do is equally important to understand:

  • It doesn’t delete data already ingested. Conversations processed before your opt-out may still be in training pipelines. Platforms are not obliged to retroactively remove your data from completed training runs.
  • It doesn’t prevent data retention. Most platforms still store your conversations for safety, abuse detection, and legal compliance purposes, regardless of your training preference.
  • It doesn’t prevent staff access. Trust and safety teams at most AI companies can still review conversations flagged for policy violations, even after you opt out of training.
  • It doesn’t affect current responses. Training and inference are separate processes: the model answers you using its existing weights either way. Opting out simply means your future sessions won’t feed the next training run (the toy sketch after this list illustrates the distinction).
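
To make that split concrete, here is a deliberately toy Python sketch. It is an illustration only, not how any real model works: a “model” reduced to a single number, so the read-only nature of inference is visible.

    # Toy illustration only: inference reads the weights, training writes them.
    # Opting out removes your conversations from the training step; it does
    # not change how the model answers you today.
    weights = {"knowledge": 1.0}

    def respond(prompt: str) -> str:
        # Inference: read-only use of the current weights. Identical whether
        # or not you have opted out of training.
        return f"answer to {prompt!r} (knowledge={weights['knowledge']:.1f})"

    def training_run(conversations: list[str]) -> None:
        # Training: every included conversation nudges the weights.
        for _ in conversations:
            weights["knowledge"] += 0.1

    opted_out = True
    my_conversations = ["draft earnings summary", "internal roadmap question"]

    # With the opt-out set, your conversations never reach the training step.
    training_run([] if opted_out else my_conversations)
    print(respond("a new question"))  # same response either way, today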

With that framing in place, here’s how to actually do it on each major platform.

ChatGPT (Free & Plus)

ChatGPT free accounts are opted into training data use by default. The opt-out is buried in settings and, depending on your app version, may be bundled with chat history (see the note after the steps).

  1. Click your profile icon in the bottom-left corner of the ChatGPT interface
  2. Select Settings
  3. Navigate to Data Controls
  4. Toggle “Improve the model for everyone” to OFF

Note on chat history: in earlier versions of ChatGPT, the training opt-out was bundled with chat history in a single “Chat history & training” toggle, so opting out also stopped new conversations from being saved to the sidebar. Current versions separate the two controls, and turning off “Improve the model for everyone” leaves your history intact. If your app still shows the older combined toggle, expect history to turn off with it — the coupling was a deliberate design decision by OpenAI, not a bug.

ChatGPT Team and Enterprise accounts do not train on tenant data by default. If your organization has provisioned a Team or Enterprise account, individual employees don’t need to opt out — the protection applies at the account level automatically.

Google Gemini

Google Gemini’s training data controls are managed through your Google Account, not through the Gemini interface itself. The setting is called “Gemini Apps Activity.”

  1. Go to myaccount.google.com
  2. Select Data & Privacy from the left navigation
  3. Scroll to “History settings”
  4. Click “Gemini Apps Activity”
  5. Turn it off

When Gemini Apps Activity is disabled, Google will no longer save your Gemini conversations to your account or use them to improve Google’s AI models. Two caveats: Google has stated that conversations are still kept for a short period (up to 72 hours) for safety and service purposes, and with activity off, Gemini won’t have memory of previous conversations.

Google Workspace accounts (work or school accounts managed by an administrator) may have this setting locked or managed centrally. If you cannot change the setting, your organization’s admin controls it — contact your IT team.

Microsoft Copilot (free consumer)

The free consumer version of Microsoft Copilot — available at copilot.microsoft.com — has a separate opt-out path from Microsoft 365 Copilot for enterprise. These are different products with different privacy defaults.

  1. Go to privacy.microsoft.com
  2. Sign in with your Microsoft account
  3. Open the Privacy dashboard
  4. Find and select “AI model training opt-out”
  5. Turn the setting off

Microsoft 365 Copilot for enterprise (the version organizations pay for as an add-on to Microsoft 365) does not train on tenant data by default. Microsoft has committed that enterprise customer data is not used to train shared foundation models. The opt-out process above applies to the free consumer product only.

Claude (Anthropic)

Claude.ai free and Pro plans may use conversations to improve Anthropic’s models, subject to privacy settings. The opt-out is available directly in the Claude.ai interface.

  1. Go to claude.ai and sign in
  2. Click your profile / account menu
  3. Select Settings
  4. Go to Privacy
  5. Toggle off training use for your conversations

API users have a different default: conversations sent through the Anthropic API are not used for training by default. If you are using Claude through a third-party application built on the API, check that application’s privacy policy — your data may be governed by the developer’s terms, not Anthropic’s directly.
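
For teams building on the API directly, the default looks like this in practice. A minimal sketch using Anthropic’s official Python SDK; the model ID is a placeholder, so check Anthropic’s current documentation before running it:

    # Minimal sketch: calling Claude through the Anthropic API, which does
    # not use submitted conversations for training by default.
    # Assumes `pip install anthropic` and ANTHROPIC_API_KEY in the
    # environment; the model ID below is a placeholder.
    import anthropic

    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the env

    message = client.messages.create(
        model="claude-sonnet-4-5",  # placeholder; use a current model ID
        max_tokens=256,
        messages=[{"role": "user", "content": "Summarize our meeting notes."}],
    )
    print(message.content[0].text)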

Perplexity AI

Perplexity allows users to opt out of AI training through their account settings.

  1. Go to perplexity.ai and sign in
  2. Click your profile and open Settings
  3. Navigate to Account
  4. Find “AI Training”
  5. Toggle it off

As with other platforms, opting out here affects future conversations only. Previously submitted queries may have already been processed as training data.

GitHub Copilot

GitHub Copilot collects code snippets and suggestions to improve the product. The opt-out location depends on whether you are using an individual or enterprise plan.

Individual plans:

  1. Go to github.com and sign in
  2. Navigate to your account Settings
  3. Select Copilot from the left sidebar
  4. Review the data-sharing preferences; note that “Suggestions matching public code” controls whether Copilot may show completions that match public repositories, a separate concern from training
  5. Disable “Allow GitHub to use my code snippets for product improvements”

Enterprise plans: Training data opt-outs for GitHub Copilot Business and Enterprise are managed at the organization level by administrators, not by individual users. Administrators can configure whether GitHub is allowed to use code from their organization to improve the model. If you are on an enterprise plan, contact your organization’s GitHub admin.
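
Admins who want to audit this programmatically can query GitHub’s REST API. A hedged sketch using Python’s requests library; the organization name is a placeholder, and the endpoint and field names reflect GitHub’s documented Copilot billing API at the time of writing, so verify against current docs:

    # Sketch: read an organization's Copilot settings, including the
    # "Suggestions matching public code" policy. Requires a token with
    # org admin access; "your-org" is a placeholder.
    import os
    import requests

    resp = requests.get(
        "https://api.github.com/orgs/your-org/copilot/billing",
        headers={
            "Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}",
            "Accept": "application/vnd.github+json",
        },
        timeout=10,
    )
    resp.raise_for_status()
    settings = resp.json()

    # Typically "allow", "block", or "unconfigured"
    print("Suggestions matching public code:", settings.get("public_code_suggestions"))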

Why individual opt-outs aren’t enough for organizations

Going through this list and opting out on your own accounts is a sensible step. But for organizations, individual opt-outs have a fundamental limitation: you can control your own settings, not your colleagues’.

Every employee who uses a personal free account — or who simply hasn’t gone through these steps — remains a potential data exposure vector. The risk isn’t concentrated in a single person who made a bad decision; it’s distributed across every person who uses any AI tool without having changed the default settings.

The math is straightforward. If 10% of your team has opted out, 90% haven’t. The confidential information those 90% type into their AI tools is still fair game for training. An individual opt-out doesn’t change the organization’s risk profile — it only removes that one individual from the exposure.
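
As a back-of-the-envelope sketch, with hypothetical numbers:

    # Hypothetical numbers: individual opt-outs barely move org-level exposure.
    team_size = 200
    opt_out_rate = 0.10      # fraction of employees who changed their settings
    weekly_ai_users = 0.60   # fraction who use an AI tool in a given week

    exposed = team_size * weekly_ai_users * (1 - opt_out_rate)
    print(f"~{exposed:.0f} of {team_size} employees still feed training data each week")
    # -> ~108 of 200 employees still feed training data each week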

The systemic solution is organizational AI governance: approved business-tier accounts (where training opt-out is the default at the organizational level), clear policies on which tools employees may use, and visibility into which AI tools are actually in use across the organization.
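
What the policy-plus-visibility combination can look like in practice is sketched below; the tool names and tiers are illustrative examples, not recommendations:

    # Illustrative sketch: check tools discovered in use against an approved
    # allowlist. Tool names and tiers are hypothetical examples.
    APPROVED_TOOLS = {
        "ChatGPT": "Enterprise",  # training opt-out is the default at this tier
        "Claude": "Team",
        "Microsoft 365 Copilot": "Enterprise",
    }

    discovered_in_use = ["ChatGPT", "Perplexity", "Claude"]

    for tool in discovered_in_use:
        tier = APPROVED_TOOLS.get(tool)
        print(f"{tool}: " + (f"approved ({tier} tier)" if tier else "NOT approved, follow up"))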

For EU organizations subject to the EU AI Act, individual opt-outs don’t satisfy your documentation and oversight obligations. You need to know which tools are in use, not just hope each employee has changed their privacy settings. The EU AI Act requires deployers — which includes organizations whose employees use AI tools at work — to maintain oversight, document usage, and be able to demonstrate compliance. That requires systemic controls, not individual settings changes.

Frequently asked questions

Does opting out delete the data I’ve already shared?

No. Opting out only prevents future conversations from being used for training. Data already processed may remain in training pipelines. To request deletion of stored data, you need to submit a separate data deletion request under GDPR Article 17.

Will opting out change the answers I get today?

No. Opting out affects training (how the model improves over time), not inference (how it responds to you now). Your current experience is unchanged.

How do I know my opt-out has taken effect?

There’s no real-time confirmation. Most platforms apply opt-outs within 30 days. The only way to verify is to re-check your settings periodically — platforms sometimes reset them after updates.

Can an employer opt out on behalf of all employees?

Not directly via platform settings — opt-outs are per-user. However, employers can require employees to use approved business-tier accounts (Team, Enterprise) which have organizational-level controls that supersede individual settings.