- No major AI platform is fully GDPR compliant for all use cases — compliance depends heavily on how the tool is used and which tier of service is in place
- Consumer tiers (free ChatGPT, personal Gemini) are the most problematic: no DPA, broad training data use, no organizational controls
- Enterprise and business tiers with a signed Data Processing Agreement (DPA) come closest to GDPR compliance for organizational use
- GDPR requires a lawful basis for processing — most AI providers rely on "legitimate interests" for consumer products, which is contested for training data use
- In 2023–2024, Italian, Spanish, and Polish data protection authorities all investigated or temporarily banned ChatGPT over GDPR concerns
Under GDPR, any time personal data is processed — collected, stored, transmitted, or used — the controller must have a lawful basis, maintain records of processing activities, respect data subject rights (access, correction, erasure, portability), and ensure appropriate safeguards for cross-border data transfers. Applying these requirements to AI tools reveals significant gaps, especially at the consumer tier.
Key GDPR requirements relevant to AI tools
GDPR imposes a specific set of obligations on anyone processing personal data. When employees use AI tools at work, several of these obligations are triggered simultaneously — and most consumer AI products were not designed with them in mind.
The requirements most relevant to AI tool use are:
- Lawful basis for processing (Art. 6) — every processing activity needs a legal justification: consent, contract, legal obligation, vital interests, public task, or legitimate interests.
- Special categories of data (Art. 9) — health, biometric, political, religious, and other sensitive data face stricter rules and generally require explicit consent or a specific exemption.
- Data subject rights (Art. 15–22) — individuals have the right to access their data, correct it, request erasure, obtain a portable copy, and restrict or object to processing. Organizations must be able to fulfil these requests — which requires knowing that AI processing occurred in the first place.
- Data Processing Agreements (Art. 28) — when personal data is shared with a third-party processor (like an AI provider), a written DPA is mandatory. It must specify the subject matter, duration, nature, and purpose of processing.
- Cross-border transfers (Art. 44–49) — transferring personal data outside the EU requires either an adequacy decision, Standard Contractual Clauses (SCCs), or another approved mechanism. Most major AI providers are US-based.
- Storage limitation (Art. 5(1)(e)) — data should not be kept longer than necessary for its purpose. AI platforms typically retain conversation data for extended periods.
- Purpose limitation (Art. 5(1)(b)) — data collected for one purpose cannot be repurposed. Using customer data submitted to an AI tool for model training is a purpose limitation concern.
Taken together, these requirements create a demanding compliance framework that most consumer AI tools do not satisfy — and that even enterprise tiers require careful configuration to meet.
The lawful basis problem
Every instance of personal data processing under GDPR needs a lawful basis. For consumer AI platforms, this is where significant legal uncertainty exists.
Most providers rely on legitimate interests (Art. 6(1)(f)) for their consumer products — arguing that processing data to improve their services is a legitimate interest that outweighs users’ privacy rights. This basis is permissible but requires a balancing test, and regulators have increasingly challenged it for AI training use cases.
Italy’s data protection authority (the Garante) made this explicit in 2023. It found that ChatGPT lacked a sufficient lawful basis for processing Italian users’ personal data and imposed a temporary ban. OpenAI was required to implement changes and provide clearer information before service was restored. The investigation set a precedent that other EU regulators have since followed.
Consent as a lawful basis has its own problems. GDPR requires consent to be freely given, specific, informed, and withdrawable. For AI training, consent must be specific to that purpose — a general terms-of-service agreement is not sufficient. And withdrawal of consent must be as easy as giving it, which is hard to reconcile with data already incorporated into model training.
For organizations, the issue compounds. An organization processing employee or customer data through an AI tool needs its own lawful basis analysis — separate from the AI provider’s. If a company pastes customer information into ChatGPT to draft a response, the company is processing that data, and needs a lawful basis for sharing it with OpenAI. A signed DPA with OpenAI is necessary but not sufficient — the organization also needs to justify the processing under its own data controller obligations.
How major platforms handle GDPR
GDPR compliance varies substantially across platforms and — critically — across tiers within the same platform. The difference between a consumer account and an enterprise account at the same provider can be the difference between GDPR-compliant and non-compliant use.
| Platform | DPA available? | Lawful basis for training | EU data residency? | Right to erasure | Regulator incidents |
|---|---|---|---|---|---|
| ChatGPT Free | No | Legitimate interests (contested) | No (US servers) | Via privacy portal | Italy ban (2023), Spain/Poland investigations |
| ChatGPT Enterprise | Yes | DPA/contract | EU region option | Via DPA | None at enterprise tier |
| Google Gemini (personal) | No | Legitimate interests | EU option in Workspace | Via Google account | France investigation (2024) |
| Google Workspace Gemini | Yes | DPA/contract | EU | Via Google Workspace | None at Workspace tier |
| Microsoft Copilot (free) | No | Legitimate interests | US servers | Via Microsoft privacy portal | Under review |
| Microsoft 365 Copilot | Yes | DPA/M365 agreement | EU region option | Via M365 admin | None at M365 tier |
| Claude.ai (free/Pro) | No | Legitimate interests | US servers | Via privacy portal | No major incidents |
| Anthropic API | Yes | DPA available | EU region option | Via DPA | None at API tier |
The pattern is consistent: consumer tiers lack DPAs, rely on contested lawful bases, and have attracted regulatory scrutiny. Business and enterprise tiers, with signed DPAs and organizational controls, provide a much more defensible compliance position — though they still require the organization to configure them correctly and document their own processing activities.
EU AI Act note: GDPR compliance and EU AI Act compliance are separate but related obligations. GDPR governs data protection; the EU AI Act governs AI system risk and oversight. Organizations need both frameworks in place — not just one.
Cross-border data transfers
Most major AI providers are headquartered in the United States. Transferring personal data to the US — which happens whenever an EU employee sends a prompt containing personal data to a US-based AI service — requires a valid transfer mechanism under GDPR Chapter V.
Post-Schrems II (the 2020 Court of Justice ruling that invalidated the Privacy Shield), the primary mechanisms for US transfers are:
- Standard Contractual Clauses (SCCs) — contractual clauses approved by the European Commission. All major AI providers have adopted SCCs as part of their data processing terms.
- EU-US Data Privacy Framework (DPF) — the replacement for Privacy Shield, adopted in 2023. Several major US providers have self-certified under the DPF, which provides an adequacy-equivalent transfer mechanism.
However, SCCs alone are not sufficient. Organizations are also required to conduct a Transfer Impact Assessment (TIA) — an evaluation of whether the legal framework in the destination country (the US, in most cases) provides an essentially equivalent level of protection to GDPR. Most organizations that use AI tools have not completed TIAs, which means their transfer mechanism is technically incomplete.
What organizations should do:
- Verify that your AI provider has adopted current SCCs (Module 2: controller-to-processor) as part of their DPA
- Request TIA documentation from your provider — most enterprise agreements include this
- Use EU data residency options where they are available; residency greatly reduces transfer exposure, though remote access to the data from outside the EU can still count as a transfer
- Document the transfer mechanism and TIA in your Record of Processing Activities (ROPA)
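The ROPA documentation step above can be captured as a structured record. A minimal sketch in Python — the field names and the review rule are illustrative assumptions, not a prescribed GDPR format; real ROPA templates are organization-specific:

```python
from dataclasses import dataclass

@dataclass
class TransferRecord:
    """One ROPA entry documenting a cross-border transfer to an AI provider.
    Field names are hypothetical, chosen to mirror the checklist above."""
    processor: str           # the AI provider acting as processor
    data_categories: str     # what personal data is sent
    transfer_mechanism: str  # e.g. "SCCs (Module 2)" or "EU-US DPF"
    tia_completed: bool      # Transfer Impact Assessment on file?
    tia_reference: str       # internal document reference, "" if none
    eu_residency: bool       # EU data residency option enabled?

def needs_review(r: TransferRecord) -> bool:
    """Flag entries with neither a completed TIA nor EU residency:
    their transfer mechanism is, in the wording above, incomplete."""
    return (not r.tia_completed) and (not r.eu_residency)

entry = TransferRecord(
    processor="ExampleAI Inc. (US)",
    data_categories="customer contact details in support drafts",
    transfer_mechanism="SCCs (Module 2: controller-to-processor)",
    tia_completed=True,
    tia_reference="TIA-2024-007",
    eu_residency=False,
)

print(needs_review(entry))  # False: a TIA is on file
```

Keeping entries machine-checkable like this makes it trivial to audit which AI transfers still lack a TIA or residency configuration.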
Data subject rights in practice
GDPR gives individuals specific rights over their personal data, and organizations must be able to respond to Data Subject Access Requests (DSARs). For AI tool usage, this creates practical challenges.
Access requests (Art. 15) — most platforms provide a way to download conversation history. However, if an employee has submitted a customer’s personal data to an AI tool, the customer may have the right to see what was submitted and how it was used. The organization needs to be able to retrieve and produce this information.
Right to erasure (Art. 17) — platforms accept deletion requests and typically honour them within 30 days. However, a safety hold of up to 30 days is standard during which the data remains accessible. More problematically, if data has been used in model training, erasure is technically very difficult — models cannot easily “forget” information incorporated during training. Most providers acknowledge this limitation in their privacy documentation.
Data portability (Art. 20) — conversation exports in JSON or plain text are available from all major platforms. This satisfies the portability requirement for the individual’s own data.
Restriction (Art. 18) — support for restricting processing while a DSAR is investigated is limited across most consumer platforms and requires enterprise-tier agreements to implement properly.
The fundamental challenge for organizations is this: to respond to a DSAR covering AI-processed data, the organization must first know that the AI processing happened. Consumer-tier AI usage leaves no organizational audit trail — which makes DSAR compliance practically impossible for unmonitored shadow AI use.
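One way to make that audit trail concrete is to route AI use through a channel that logs every submission, then search the log when a request arrives. A minimal sketch, assuming a hypothetical in-house log format (in practice this would be a database populated by a logging proxy, not an in-memory list):

```python
# Hypothetical audit log: one entry per prompt sent to an AI tool.
audit_log = [
    {"ts": "2024-05-02T10:14:00Z", "user": "agent42",
     "tool": "chat-enterprise", "subjects": ["jane.doe@example.com"]},
    {"ts": "2024-05-03T09:01:00Z", "user": "agent17",
     "tool": "chat-enterprise", "subjects": []},
]

def dsar_hits(log, subject_id):
    """Return every logged AI submission involving the data subject,
    so the organization can report what was processed, when, and by whom."""
    return [e for e in log if subject_id in e["subjects"]]

hits = dsar_hits(audit_log, "jane.doe@example.com")
print(len(hits))  # 1 matching submission to disclose
```

The point is not the lookup itself but the precondition: without some logged, auditable channel, there is nothing to search and the DSAR cannot be answered.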
What GDPR-compliant AI use looks like for organizations
There is no single configuration that guarantees GDPR compliance for all AI tool use, but organizations that take the following steps are in a substantially more defensible position:
- Use business or enterprise tiers with signed DPAs. Free and personal consumer tiers lack the contractual framework GDPR Article 28 requires. A signed DPA is the minimum starting point for any organizational AI processing of personal data.
- Document AI processing activities in your ROPA. Your Record of Processing Activities must include AI tool processing: the categories of data, the purpose, the legal basis, the retention period, and the transfer mechanism. Most organizations’ ROPAs don’t mention AI tools at all.
- Run a DPIA for high-risk AI use cases. A Data Protection Impact Assessment is required where processing is likely to result in a high risk to individuals — systematic monitoring, processing of special categories, or large-scale profiling. Many AI applications trigger this threshold.
- Restrict the categories of data employees may submit to AI tools. A data classification policy that prohibits submitting personal data of customers, employees, or third parties to AI tools (except under specific, documented conditions) reduces exposure significantly.
- Have a DSAR response process that covers AI-processed data. Your process for responding to data subject rights requests must account for data that may have been submitted to AI tools. This requires either monitoring AI usage or restricting it to documented, auditable channels.
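The data classification policy described above can be partially enforced in code with a pre-submission screen that blocks prompts containing obvious personal data patterns. A rough sketch only — the regexes are illustrative and catch common formats (emails, international-style phone numbers), nowhere near a complete PII detector; production deployments use dedicated DLP tooling:

```python
import re

# Illustrative patterns only; not exhaustive PII detection.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "phone": re.compile(r"\+\d{2}[\s-]?\d{2,4}([\s-]?\d{2,4}){2,3}"),
}

def screen_prompt(text: str) -> list[str]:
    """Return the names of PII patterns found in the prompt;
    an empty list means the prompt passes this (very rough) screen."""
    return [name for name, pat in PII_PATTERNS.items() if pat.search(text)]

print(screen_prompt("Summarize this ticket from jane.doe@example.com"))
# ['email'] — block, or route to an approved, DPA-covered channel
print(screen_prompt("Draft a generic refund policy paragraph"))
# []
```

A screen like this is best treated as a tripwire that triggers the documented exception process, not as proof that a prompt is free of personal data.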