AI Data Protection Impact Assessment Template
A GDPR-compliant Data Protection Impact Assessment (DPIA) template specifically designed for AI systems. Covers personal data mapping, necessity and proportionality assessment, risk identification, and mitigation measures. Required for AI systems that process personal data at scale or make automated decisions.
DPIA Scope & Data Mapping
- AI system name:
- DPIA author:
- DPO consulted: Yes / No
- Date:
- Version:
Why Is a DPIA Required?
A DPIA is required under GDPR Article 35 when processing is likely to result in high risk. Check all that apply:
- Automated decision-making with legal or significant effects (Art. 22)
- Large-scale processing of special category data
- Systematic monitoring of individuals
- Use of new technologies (AI/ML)
- Profiling with significant effects
- Processing that could prevent individuals from exercising rights
Personal Data Processed
| Data Category | Examples | Source | Volume | Special Category? |
|---|---|---|---|---|
| | | | records | Yes/No |
| | | | records | Yes/No |
| | | | records | Yes/No |
Data Subjects
| Group | Approximate Number | Relationship |
|---|---|---|
| Customers / Employees / Public / | | |
Data Flow
- Personal data is collected from:
- Data is sent to: (AI system/provider)
- Data is processed by: (model/algorithm)
- Outputs are: (stored/displayed/used for decisions)
- Data is retained for: ___, then deleted/anonymised
Third Parties
| Party | Role | Location | DPA in Place? |
|---|---|---|---|
| AI provider | Processor | | Yes/No |
| Cloud hosting | Sub-processor | | Yes/No |
Necessity & Proportionality Assessment
Lawful Basis for Processing
| Processing Activity | Lawful Basis (Art. 6) | Justification |
|---|---|---|
| | Consent / Contract / Legitimate interest / Legal obligation / | |
If relying on legitimate interest, complete the balancing test:
- Our legitimate interest:
- Impact on individuals:
- Balancing conclusion: Interest outweighs impact / Impact outweighs interest
Necessity Assessment
For each data element, can the purpose be achieved with less personal data?
| Data Element | Necessary? | Can It Be Anonymised/Pseudonymised? | Action |
|---|---|---|---|
| | Yes/No | Yes/No | |
| | Yes/No | Yes/No | |
| | Yes/No | Yes/No | |
Data Minimisation Measures
- Only collecting data that is strictly necessary for the AI purpose
- Pseudonymising personal data before AI processing where possible
- Using aggregated/anonymised data for model training where possible
- Implementing data retention limits: delete after ___ days
- Restricting access to personal data to authorised personnel only
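The pseudonymisation and minimisation measures above can be sketched in code. The following is a minimal, illustrative Python example using keyed hashing (HMAC-SHA256); the field names, key handling, and `minimise_record` helper are assumptions for illustration, not part of this template:

```python
import hashlib
import hmac

# Hypothetical key: in practice, load from a secrets manager kept
# outside the AI pipeline, so tokens cannot be reversed downstream.
PSEUDONYMISATION_KEY = b"replace-with-a-key-from-your-secrets-manager"

def pseudonymise(value: str) -> str:
    """Replace a direct identifier with a keyed hash (HMAC-SHA256).

    The mapping is repeatable (same input -> same token), so records can
    still be linked, but the original value cannot be recovered without
    the key. Under GDPR this is pseudonymisation, NOT anonymisation:
    the data remains personal data.
    """
    return hmac.new(PSEUDONYMISATION_KEY, value.encode("utf-8"),
                    hashlib.sha256).hexdigest()

def minimise_record(record: dict) -> dict:
    """Drop fields the AI purpose does not need; pseudonymise identifiers."""
    direct_identifiers = {"name", "email"}          # hypothetical field names
    unnecessary_fields = {"phone", "home_address"}  # collected but not needed
    out = {}
    for field, value in record.items():
        if field in unnecessary_fields:
            continue  # data minimisation: never pass it downstream
        out[field] = pseudonymise(value) if field in direct_identifiers else value
    return out

record = {"name": "Jane Doe", "email": "jane@example.com",
          "phone": "07700 900000", "home_address": "1 High St",
          "purchase_total": 42.0}
print(minimise_record(record))
```

Applying this before data reaches the AI system means the provider only ever sees tokens, which also simplifies the "Third Parties" analysis above.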
Automated Decision-Making (Article 22)
Does this AI system make automated decisions with legal or significant effects?
- Yes — safeguards required:
  - Right to human review of automated decisions
  - Right to express a point of view
  - Right to contest the decision
  - Meaningful information about the logic involved
- No — decisions are AI-assisted, with a human making the final decision
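Where human review is adopted as a safeguard, the routing logic can be made explicit in the system design. A minimal, illustrative Python sketch — the `Decision` type, field names, and confidence threshold are assumptions for illustration, not requirements of Article 22:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Decision:
    subject_id: str
    outcome: str             # e.g. "approve" / "decline"
    model_confidence: float  # 0.0 .. 1.0
    reviewed_by: Optional[str] = None  # filled in after human review

def route(decision: Decision, significant_effect: bool) -> str:
    """Route a model output either straight through or to a human reviewer.

    Any decision with legal or similarly significant effects goes to a
    human (Art. 22 safeguard); purely advisory, low-impact outputs may
    be applied automatically. The 0.9 threshold is illustrative.
    """
    if significant_effect or decision.model_confidence < 0.9:
        return "human_review_queue"
    return "auto_apply"

print(route(Decision("s1", "decline", 0.95), significant_effect=True))
```

Logging which route each decision took also supports the "meaningful information about the logic involved" safeguard, since reviewers can see why a case reached them.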
Risk Assessment & Mitigation
Risks to Data Subject Rights
| # | Risk | GDPR Right Affected | Likelihood (1-5) | Severity (1-5) | Score | Mitigation Measure |
|---|---|---|---|---|---|---|
| 1 | Inaccurate AI output used in decisions about individuals | Right to accuracy | | | | Human review of all decisions; correction mechanism |
| 2 | Individuals unaware AI is processing their data | Right to information | | | | Update privacy notice; transparency statement |
| 3 | Cannot explain AI decision to individual | Right to explanation | | | | Implement explainability; feature importance logging |
| 4 | Personal data leaked through AI outputs | Right to security | | | | Output filtering; PII detection; access controls |
| 5 | AI bias leading to unfair treatment | Right to non-discrimination | | | | Bias testing; fairness monitoring; human oversight |
| 6 | Inability to delete data from trained model | Right to erasure | | | | Document limitation; anonymise training data; retrain if needed |
| 7 | Vendor uses data for unauthorised purposes | All rights | | | | Contractual prohibition; audit rights; DPA |
| 8 | Data breach at AI provider | Right to security | | | | Encryption; breach notification clause; incident response plan |
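A common convention for the Score column is likelihood × severity, giving a value from 1 to 25. A minimal, illustrative Python sketch; the rating thresholds are assumptions to agree with your DPO, not values prescribed by the GDPR:

```python
from dataclasses import dataclass

@dataclass
class Risk:
    description: str
    likelihood: int  # 1 (rare) .. 5 (almost certain)
    severity: int    # 1 (minimal impact) .. 5 (severe impact)

    @property
    def score(self) -> int:
        # Score column: likelihood x severity, range 1..25.
        return self.likelihood * self.severity

    @property
    def rating(self) -> str:
        # Illustrative thresholds only; set your own with the DPO.
        if self.score >= 15:
            return "high"    # consider Art. 36 prior consultation
        if self.score >= 8:
            return "medium"
        return "low"

risks = [
    Risk("Inaccurate AI output used in decisions", likelihood=3, severity=4),
    Risk("Data breach at AI provider", likelihood=2, severity=5),
]
for r in risks:
    print(f"{r.description}: score={r.score} ({r.rating})")
```

Scoring each risk before and after mitigation makes the residual risk assessment below auditable rather than a judgment recorded only in prose.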
Residual Risk Assessment
After applying mitigation measures, is the residual risk acceptable?
- Yes — residual risks are low and acceptable
- No — further mitigation needed before proceeding
- Consult ICO — risks remain high; prior consultation may be required under Art. 36
Sign-Off
| Role | Name | Approved | Date |
|---|---|---|---|
| DPIA Author | | | |
| Data Protection Officer | | | |
| System Owner | | | |
| Information Security | | | |
Review Schedule
This DPIA will be reviewed:
- Every ___ months
- When the AI system or data processing changes materially
- When a relevant data protection incident occurs
- When guidance from the ICO or EDPB changes
How to use this template
Determine if a DPIA is required
Use the trigger checklist. If any item applies, a DPIA is mandatory. When in doubt, conduct one — it is good practice regardless.
Map all personal data flows
Document every piece of personal data that enters, is processed by, and exits the AI system. Include third-party processors.
Assess necessity and proportionality
For each data element, ask: is this strictly necessary? Can the same outcome be achieved with less data or anonymised data?
Identify and mitigate risks
Consider risks to each GDPR right. Define specific mitigation measures and assign owners.
Obtain DPO sign-off
Your Data Protection Officer must review and sign off on the DPIA before the AI system goes live. If residual risks are high, consult the ICO.
Frequently asked questions
Is a DPIA always legally required for AI systems?
No, but it is required when AI processing is likely to result in high risk to individuals. In practice, most AI systems that process personal data will trigger a DPIA requirement. The ICO recommends conducting a DPIA for any processing involving new technology.
Who is responsible for conducting the DPIA?
The data controller (your organisation) is responsible. In practice, the project team writes the DPIA with input from the DPO, legal, security, and the AI development team. The DPO advises and reviews but does not conduct it.
What happens if risks remain high after mitigation?
If risks remain high after mitigation, you must consult your supervisory authority (e.g. the ICO in the UK) under GDPR Article 36 before proceeding. They will advise on whether additional measures are needed.
How does this DPIA relate to the EU AI Act?
The EU AI Act introduces additional requirements for high-risk AI systems, including conformity assessments. A DPIA covers GDPR requirements; you may also need to conduct an AI Act impact assessment for high-risk use cases. The two assessments can be conducted together.
Can we use personal data to train AI models?
Only with a valid lawful basis under GDPR. Consent is one option but must be specific and informed. Legitimate interest may apply but requires a thorough balancing test. Fully anonymising data before training takes it outside the scope of the GDPR entirely and is the preferred approach.