
Deploying AI systems in European organisations now means complying with two major regulatory frameworks: the GDPR, governing personal information, and the EU AI Act, establishing governance for AI systems. Understanding how these requirements interact, and addressing them systematically, protects your organisation from penalties of up to 4% of global annual turnover under the GDPR and up to 7% under the AI Act, while building the operational maturity that customers and partners expect.

This guide is aimed at CISOs, privacy officers, and legal teams responsible for overseeing AI projects. It does not replace legal advice, but it provides the orientation you need to make well-founded decisions. For broader strategic context, see our AI advisory.


GDPR Requirements for AI Systems

The GDPR contains no AI-specific provisions, but its core principles apply directly to AI systems, often going unnoticed until a problem surfaces.

Automated Decision-Making (Art. 22)

When an AI system makes decisions with legal or similarly significant effect in a fully automated way, Art. 22 GDPR applies. Affected individuals have the right to have a natural person review the decision. Typical cases: credit scoring, automated candidate screening, risk assessment in insurance. Exceptions exist (contract fulfilment, consent), but they require additional safeguards.

Minimisation and Purpose Limitation (Art. 5)

AI models are trained on large datasets, often records collected for a different purpose. Art. 5(1)(b) requires purpose limitation. Training on historical customer information for a new use case is unlawful without a separate legal basis. Art. 5(1)(c) requires minimisation: only the personal details actually necessary for the specific purpose may be processed.

Privacy by Design and Privacy by Default (Art. 25)

AI systems must be configured to operate in a privacy-friendly manner by default. This means no unnecessary storage, no broad access rights as a default setting, and technical pseudonymisation measures built into the architecture phase, not added afterwards.
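The pseudonymisation mentioned above can start as something as simple as a keyed hash applied before records enter the AI pipeline. A minimal sketch in Python; the function and key handling are illustrative, not a prescribed mechanism:

```python
import hashlib
import hmac

def pseudonymise(identifier: str, secret_key: bytes) -> str:
    """Replace a direct identifier with a keyed hash (HMAC-SHA256).

    The secret key prevents simple dictionary attacks on the hashes.
    Note: the result is pseudonymised, not anonymised - whoever holds
    the key can re-link it, so the GDPR still applies to the output.
    """
    return hmac.new(secret_key, identifier.encode("utf-8"), hashlib.sha256).hexdigest()

# The same input and key always yield the same pseudonym, so records can
# still be joined across tables without exposing the raw identifier.
key = b"example-key-keep-in-a-key-management-system"
token = pseudonymise("jane.doe@example.com", key)
```

Because the mapping is deterministic per key, analytics and joins keep working, while the raw identifier never reaches the model or its logs.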

Implementing these requirements is part of every serious AI compliance programme.


EU AI Act and GDPR

Since August 2024, the EU AI Act has been in force. This regulatory framework operates in parallel with the GDPR, creating both significant overlaps and independent obligations.

Risk Classes and Their Consequences

The AI Act distinguishes four risk classes. High-risk AI systems (Annex III) are subject to the strictest requirements: conformity assessment, technical documentation, human oversight, and registration in an EU database. High-risk covers systems in areas including critical infrastructure, education, employment, law enforcement, and credit provision.

Timeline of Obligations

  • February 2025: Prohibited AI practices apply (Art. 5)
  • August 2025: Requirements for GPAI models (General Purpose AI) take effect
  • August 2026: High-risk requirements fully applicable to new systems
  • August 2027: High-risk requirements also applicable to existing systems

Interaction with the GDPR

Both frameworks require transparency, documentation, and risk assessment, but not in identical terms. A DPIA under Art. 35 GDPR does not automatically satisfy the conformity assessment under the AI Act. Organisations must coordinate both processes to avoid duplication and close gaps. An integrated AI strategy helps align these obligations from the outset.


7 Practical Steps for GDPR-Compliant AI

1. Conduct a Data Protection Impact Assessment (DPIA) Early

Art. 35 GDPR mandates a DPIA when AI systems are likely to result in a high risk to the rights and freedoms of affected individuals. A DPIA is not a form; it is an analysis and governance process. It must be completed before deployment, not after. Involve your privacy officer in the design phase, not at final sign-off.

2. Document Purpose Limitation and Minimisation

For every AI system, set out in writing which records are processed, for what purpose, and on what legal basis. Verify that training inputs, inference inputs, and outputs each have a valid legal basis. Synthetic datasets or anonymisation can help resolve purpose limitation issues at the training stage.
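Writing this down can be as lightweight as one structured record per system. A hypothetical sketch, with field names chosen for illustration only:

```python
from dataclasses import dataclass

@dataclass
class ProcessingRecord:
    system: str
    data_categories: list[str]   # which records are processed
    purpose: str                 # for what purpose
    legal_basis: str             # on what legal basis (Art. 6 GDPR)
    covers_training: bool        # does the basis cover model training?
    covers_inference: bool       # ... and inference on live inputs?

record = ProcessingRecord(
    system="support-chatbot",
    data_categories=["name", "ticket history"],
    purpose="answering customer support queries",
    legal_basis="Art. 6(1)(b) GDPR (performance of a contract)",
    covers_training=False,   # training on these records needs its own basis
    covers_inference=True,
)
```

Keeping training and inference as separate flags makes the gap visible early: a basis that covers answering a ticket does not automatically cover using that ticket to fine-tune a model.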

3. Structure Data Processing Agreements (DPAs) Properly

When you use external AI providers (SaaS, API, cloud models), these are typically processors under Art. 28 GDPR, which requires a written data processing agreement. Check whether the provider also uses your information to train its own models. If so, the provider is not acting as a mere processor for that activity; it is carrying out independent processing, which requires its own legal basis or must be contractually excluded.

4. Meet Transparency Obligations

Affected individuals must be informed under Art. 13 and 14 GDPR about the use of AI, particularly about automated decision-making (Art. 13(2)(f), Art. 14(2)(g)). Update your privacy notices. Ensure the disclosures are genuinely understandable, not just legally complete.

5. Create a Deletion Concept for AI Records

Personal information in training datasets, model weights, and logs must be deletable. This is technically demanding. Machine unlearning is not yet a solved problem. Clarify with your vendor how deletion requests under Art. 17 GDPR will be handled, ideally before you deploy a system in production.
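One way to make deletion tractable is to maintain an index of where each data subject's records live. A hedged sketch of an Art. 17 workflow under that assumption (the index structure and field names are illustrative, not a standard):

```python
def handle_erasure_request(subject_id, training_index, inference_logs):
    """Process an erasure request against an assumed subject index.

    training_index maps subject_id -> models trained on that subject's
    records. Removing the index entry does not remove the subject's
    influence from already-trained weights; the returned model list flags
    what needs retraining, output filtering, or another mitigation.
    """
    models_to_review = training_index.pop(subject_id, [])
    remaining_logs = [e for e in inference_logs if e["subject"] != subject_id]
    return models_to_review, remaining_logs

index = {"user-42": ["support-model-v3"]}
logs = [
    {"subject": "user-42", "prompt": "reset my password"},
    {"subject": "user-7", "prompt": "invoice question"},
]
models, logs = handle_erasure_request("user-42", index, logs)
```

The design choice worth noting: the function does not pretend to delete data from model weights. It surfaces the affected models so a human decides on retraining or filtering, which matches the article's point that machine unlearning is not yet solved.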

6. Adapt Technical and Organisational Measures (TOMs)

Your existing TOMs under Art. 32 GDPR may not cover AI-specific risks: model poisoning, prompt injection, membership inference attacks, inadvertent reproduction of personal details by language models. Supplement your TOMs with AI-relevant security measures following best practices for cloud-based database security and document them in your processing records.
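As one concrete TOM against inadvertent reproduction of personal details, generated text can be scanned before it leaves your boundary. A deliberately naive sketch: real deployments would use a proper PII-detection service, and the two patterns below only illustrate the control point, not adequate coverage:

```python
import re

# Illustrative patterns only - far from complete PII coverage.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
IBAN = re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b")

def redact_output(text: str) -> str:
    """Mask obvious personal identifiers in model output before the
    response is returned to the user or written to logs."""
    text = EMAIL.sub("[REDACTED-EMAIL]", text)
    return IBAN.sub("[REDACTED-IBAN]", text)
```

The same choke point is also where prompt-injection and policy filters belong: one place where every model response passes through before it reaches users or persistent storage.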

7. Fulfil Documentation and Accountability Obligations Systematically

The accountability principle (Art. 5(2) GDPR) requires that you can demonstrate compliance with all principles. For AI, this means documentation of model selection, training inputs, DPIA outcomes, processing agreements, and internal approval processes. Structure this documentation so it can be presented to a supervisory authority on request. Our AI compliance section provides an overview of frameworks that structure this evidence.


On-Premise vs. Cloud: A Data Protection Perspective

For organisations operating under GDPR, the question of where personal information is processed is not a technical preference; it is a compliance decision.

Sovereignty and Third-Country Transfers

Using AI APIs from US providers (OpenAI, Google, AWS Bedrock, Microsoft Azure) is generally possible, but requires due diligence. Since the EU-US Data Privacy Framework (2023), transfer conditions have eased where the provider is certified. Nevertheless, a residual risk remains. US authorities can access records from US-based providers under the CLOUD Act, even when those records sit in European data centres. Understanding data sovereignty requirements helps you evaluate this risk in your specific context.

On-Premise and European Alternatives

Open-source models (Llama, Mistral, Qwen) can be run locally or in European data centres. This eliminates third-country transfer risks and provides full control over both the information processed and model behaviour. The operational overhead is higher, but the compliance position is substantially clearer.
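As an illustration of what "run locally" looks like in practice, here is a hedged sketch against an Ollama-style HTTP endpoint on localhost; the URL, request schema, and model name are assumptions to adapt to your own serving stack. The compliance-relevant point is that the prompt, which may contain personal information, never leaves your infrastructure:

```python
import json
import urllib.request

def build_request(prompt: str, model: str = "mistral") -> dict:
    # Request schema assumed from an Ollama-style local API.
    return {"model": model, "prompt": prompt, "stream": False}

def local_completion(prompt: str, model: str = "mistral",
                     endpoint: str = "http://localhost:11434/api/generate") -> str:
    """Send the prompt to a locally hosted open-source model.

    No third-country transfer takes place: both the request and the
    model weights stay on hardware you control.
    """
    payload = json.dumps(build_request(prompt, model)).encode("utf-8")
    req = urllib.request.Request(
        endpoint, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

Swapping the endpoint for a managed European hosting provider keeps the same code path while shifting the operational overhead off your team.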

Recommendation for Sensitive Data Categories

For special categories of personal information (Art. 9 GDPR: health records, biometric identifiers, political opinions), you should default to on-premise or certified European providers, such as those holding a BSI C5 attestation or an equivalent ISO 27001-based certification. Demonstrating compliance to supervisory authorities is considerably simpler in those cases.


Common Compliance Mistakes

DPA with vendor not signed or not reviewed

Many organisations either do not sign DPAs with AI vendors or accept standard contracts without checking whether the vendor uses customer information for its own purposes. Some vendors include AI training with customer records explicitly in their terms, which constitutes independent processing, not a processor relationship.

DPIA conducted too late or not at all

The DPIA is frequently treated as a closing document filled out after implementation. Legally, it must be completed before processing begins. A retrospective DPIA is not genuine protection; it is documentation without any governance effect.

Purpose limitation violated during fine-tuning

Organisations use customer records to train or fine-tune their own models without checking whether the original consent or legal basis covers that purpose. Information collected under a customer service contract cannot automatically be used to train a chatbot.

No process for individual rights in AI systems

Access, correction, and deletion requests catch organisations unprepared when AI systems handle personal information in non-transparent ways. Ensure your rights-fulfilment processes also cover AI-processed records.

Model outputs treated as non-personal

Outputs of AI systems (e.g. generated profiles, scores, recommendations) can constitute personal information, even when they are not directly read from stored records. Privacy law applies to outputs, not only to inputs.


See our AI compliance overview for legal requirements, AI strategy guide for developing a governance framework, and AI in regulated industries for sector-specific considerations.