GroveAI
Free Business Template

AI Data Protection Impact Assessment Template

A GDPR-compliant Data Protection Impact Assessment (DPIA) template specifically designed for AI systems. Covers personal data mapping, necessity and proportionality assessment, risk identification, and mitigation measures. Required for AI systems that process personal data at scale or make automated decisions.

Overview

What's included

  • DPIA trigger assessment checklist
  • Personal data mapping for AI processing
  • Lawful basis and necessity analysis
  • Risk identification across GDPR rights
  • Mitigation measures and controls
  • DPO sign-off and review schedule
1. DPIA Scope & Data Mapping

AI system name:  
DPIA author:  
DPO consulted: Yes / No
Date:  
Version:  

Why Is a DPIA Required?

A DPIA is required under GDPR Article 35 when processing is likely to result in a high risk to the rights and freedoms of individuals. Check all that apply:

  • Automated decision-making with legal or significant effects (Art. 22)
  • Large-scale processing of special category data
  • Systematic monitoring of individuals
  • Use of new technologies (AI/ML)
  • Profiling with significant effects
  • Processing that could prevent individuals from exercising rights

Personal Data Processed

| Data Category | Examples | Source | Volume | Special Category? |
| --- | --- | --- | --- | --- |
|   |   |   |   records | Yes / No |
|   |   |   |   records | Yes / No |
|   |   |   |   records | Yes / No |

Data Subjects

| Group | Approximate Number | Relationship |
| --- | --- | --- |
|   |   | Customers / Employees / Public /   |
|   |   |   |

Data Flow

  1. Personal data is collected from:  
  2. Data is sent to:   (AI system/provider)
  3. Data is processed by:   (model/algorithm)
  4. Outputs are:   (stored/displayed/used for decisions)
  5. Data is retained for:   then deleted/anonymised
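For teams that keep an inventory of data flows alongside the DPIA, the five steps above can be captured as a structured record. This is a minimal, hypothetical sketch; the class and field names are illustrative, not part of any standard.

```python
from dataclasses import dataclass

@dataclass
class AIDataFlow:
    """One personal-data flow through an AI system (field names illustrative)."""
    collected_from: str   # step 1: where the personal data is collected
    sent_to: str          # step 2: AI system or provider receiving it
    processed_by: str     # step 3: model/algorithm doing the processing
    output_handling: str  # step 4: stored / displayed / used for decisions
    retention_days: int   # step 5: retention period before deletion/anonymisation

# Example entry for a hypothetical support-ticket summarisation flow
flow = AIDataFlow(
    collected_from="customer support tickets",
    sent_to="third-party LLM API",
    processed_by="summarisation model",
    output_handling="displayed to support agents",
    retention_days=30,
)
print(flow.retention_days)
```

Keeping flows in a machine-readable form makes it easier to diff them at each DPIA review and spot new processing that was never assessed.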

Third Parties

| Party | Role | Location | DPA in Place? |
| --- | --- | --- | --- |
|   | AI provider / processor |   | Yes / No |
|   | Cloud hosting / sub-processor |   | Yes / No |
2. Necessity & Proportionality Assessment

Lawful Basis for Processing

| Processing Activity | Lawful Basis (Art. 6) | Justification |
| --- | --- | --- |
|   | Consent / Contract / Legitimate interest / Legal obligation /   |   |
|   |   |   |

If relying on legitimate interest, complete the balancing test:

  • Our legitimate interest:  
  • Impact on individuals:  
  • Balancing conclusion: Interest outweighs impact / Impact outweighs interest

Necessity Assessment

For each data element, can the purpose be achieved with less personal data?

| Data Element | Necessary? | Can It Be Anonymised/Pseudonymised? | Action |
| --- | --- | --- | --- |
|   | Yes / No | Yes / No |   |
|   | Yes / No | Yes / No |   |
|   | Yes / No | Yes / No |   |

Data Minimisation Measures

  • Only collecting data that is strictly necessary for the AI purpose
  • Pseudonymising personal data before AI processing where possible
  • Using aggregated/anonymised data for model training where possible
  • Implementing data retention limits: delete after   days
  • Restricting access to personal data to authorised personnel only

Automated Decision-Making (Article 22)

Does this AI system make automated decisions with legal or significant effects?

  • Yes — safeguards required:
    • Right to human review of automated decisions
    • Right to express point of view
    • Right to contest the decision
    • Meaningful information about the logic involved
  • No — decisions are AI-assisted with human final decision
3. Risk Assessment & Mitigation

Risks to Data Subject Rights

| # | Risk | GDPR Right Affected | Likelihood (1-5) | Severity (1-5) | Score | Mitigation Measure |
| --- | --- | --- | --- | --- | --- | --- |
| 1 | Inaccurate AI output used in decisions about individuals | Right to accuracy |   |   |   | Human review of all decisions; correction mechanism |
| 2 | Individuals unaware AI is processing their data | Right to information |   |   |   | Update privacy notice; transparency statement |
| 3 | Cannot explain AI decision to individual | Right to explanation |   |   |   | Implement explainability; feature importance logging |
| 4 | Personal data leaked through AI outputs | Right to security |   |   |   | Output filtering; PII detection; access controls |
| 5 | AI bias leading to unfair treatment | Right to non-discrimination |   |   |   | Bias testing; fairness monitoring; human oversight |
| 6 | Inability to delete data from trained model | Right to erasure |   |   |   | Document limitation; anonymise training data; retrain if needed |
| 7 | Vendor uses data for unauthorised purposes | All rights |   |   |   | Contractual prohibition; audit rights; DPA |
| 8 | Data breach at AI provider | Right to security |   |   |   | Encryption; breach notification clause; incident response plan |
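The Score column is typically computed as likelihood multiplied by severity and then banded. The sketch below shows one way to do this; the band thresholds (low ≤ 6, medium ≤ 14, high above) are illustrative assumptions, not a regulatory standard, and should match whatever risk matrix your organisation already uses.

```python
def risk_score(likelihood: int, severity: int) -> tuple[int, str]:
    """Compute score = likelihood x severity and band it (thresholds illustrative)."""
    if not (1 <= likelihood <= 5 and 1 <= severity <= 5):
        raise ValueError("likelihood and severity must each be 1-5")
    score = likelihood * severity
    if score <= 6:
        band = "low"
    elif score <= 14:
        band = "medium"
    else:
        # High residual risk after mitigation may require prior
        # consultation with the supervisory authority under Art. 36.
        band = "high"
    return score, band

print(risk_score(4, 4))  # (16, 'high')
```

Scoring each risk before and after mitigation makes the Residual Risk Assessment below auditable rather than a judgment call.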

Residual Risk Assessment

After applying mitigation measures, is the residual risk acceptable?

  • Yes — residual risks are low and acceptable
  • No — further mitigation needed before proceeding
  • Consult ICO — risks remain high; prior consultation may be required under Art. 36

Sign-Off

| Role | Name | Approved | Date |
| --- | --- | --- | --- |
| DPIA Author |   |   |   |
| Data Protection Officer |   |   |   |
| System Owner |   |   |   |
| Information Security |   |   |   |

Review Schedule

This DPIA will be reviewed:

  • Every   months
  • When the AI system or data processing changes materially
  • When a relevant data protection incident occurs
  • When guidance from the ICO or EDPB changes

Instructions

How to use this template

1. Determine if a DPIA is required

Use the trigger checklist. If any item applies, a DPIA is mandatory. When in doubt, conduct one — it is good practice regardless.

2. Map all personal data flows

Document every piece of personal data that enters, is processed by, and exits the AI system. Include third-party processors.

3. Assess necessity and proportionality

For each data element, ask: is this strictly necessary? Can the same outcome be achieved with less data or anonymised data?

4. Identify and mitigate risks

Consider risks to each GDPR right. Define specific mitigation measures and assign owners.

5. Obtain DPO sign-off

Your Data Protection Officer must review and sign off on the DPIA before the AI system goes live. If residual risks are high, consult the ICO.

Watch Out

Common mistakes to avoid

  • Treating the DPIA as a one-off compliance exercise. It should be a living document, reviewed whenever processing changes.
  • Not involving the DPO early enough. Consult them during design, not just before launch.
  • Underestimating AI-specific risks. Bias, explainability, and data retention in trained models are unique to AI systems.
  • Forgetting about training data. If personal data was used to train or fine-tune a model, that processing must also be covered.

FAQ

Frequently asked questions

Is a DPIA always required for AI systems?

No, but it is required when AI processing is likely to result in high risk to individuals. In practice, most AI systems that process personal data will trigger a DPIA requirement. The ICO recommends conducting a DPIA for any processing involving new technologies.

Who is responsible for conducting the DPIA?

The data controller (your organisation) is responsible. In practice, the project team writes the DPIA with input from the DPO, legal, security, and the AI development team. The DPO advises and reviews but does not conduct it.

What happens if risks remain high after mitigation?

If risks remain high after mitigation, you must consult your supervisory authority (e.g. the ICO in the UK) under GDPR Article 36 before proceeding. They will advise on whether additional measures are needed.

How does a DPIA relate to the EU AI Act?

The EU AI Act introduces additional requirements for high-risk AI systems, including conformity assessments. A DPIA covers GDPR requirements; you may also need to conduct an AI Act impact assessment for high-risk use cases. The two assessments can be conducted together.

Can we use personal data to train AI models?

Only with a valid lawful basis under GDPR. Consent is one option but must be specific and informed. Legitimate interest may apply but requires a thorough balancing test. Anonymising data before training takes it outside GDPR's scope entirely and is the preferred approach.

Need a custom AI template?

Our team can build tailored templates for your specific business needs. Book a free strategy call.