AI Governance Framework Template
A comprehensive framework for establishing AI governance across your organisation. Defines oversight structures, decision-making processes, risk management, and accountability mechanisms to ensure AI is used responsibly and effectively.
Governance Charter & Principles
AI Governance Charter
Organisation:
Effective date:
Version:
Next review date:
Purpose
This framework establishes the governance structures, processes, and accountability mechanisms for the responsible development, deployment, and use of AI across [Organisation name].
Scope
This framework applies to:
- All AI and machine learning models developed or deployed by the organisation
- Third-party AI services procured by the organisation
- AI tools used by employees in their work
- Automated decision-making systems that affect customers, employees, or partners
AI Principles
Our organisation commits to the following AI principles:
- Fairness — AI systems will not discriminate against individuals or groups based on protected characteristics
- Transparency — We will be clear about where and how AI is used, especially in decisions that affect people
- Accountability — Every AI system has a named owner who is responsible for its performance and compliance
- Safety — AI systems will be tested, monitored, and maintained to prevent harm
- Privacy — AI will process personal data only as necessary, with appropriate safeguards
- Human oversight — Critical decisions will include meaningful human review
Governance Objectives
- Ensure AI use aligns with organisational values and regulatory requirements
- Establish clear accountability for AI decisions and outcomes
- Manage AI-related risks proportionately
- Enable innovation while maintaining trust
Governance Structure
AI Governance Committee
Purpose: Provide strategic oversight and approval for AI initiatives
Meets: Monthly / Quarterly (select one)
| Role | Name | Responsibility |
|---|---|---|
| Chair (C-suite sponsor) | | Strategic direction, escalation decisions |
| Chief Data/AI Officer | | AI strategy, technical standards |
| Legal / Compliance | | Regulatory compliance, data protection |
| Risk Manager | | AI risk assessment, incident review |
| Business Representative | | Use case prioritisation, business impact |
| HR / People | | Employee impact, AI literacy |
| IT / Security | | Infrastructure, security, access controls |
Decision Authority Matrix
| Decision Type | Authority Level |
|---|---|
| New AI use case approval (low risk) | AI Team Lead |
| New AI use case approval (medium risk) | Head of AI + Business Owner |
| New AI use case approval (high risk) | AI Governance Committee |
| Model deployment to production | AI Team Lead + Model Owner |
| AI vendor procurement (< £50k) | Head of AI |
| AI vendor procurement (≥ £50k) | AI Governance Committee |
| AI incident response | Incident Commander (see escalation process) |
| Policy updates | AI Governance Committee |
AI Risk Classification
Risk Tier Definitions
Tier 1 — Low Risk
- Internal productivity tools (e.g. text summarisation, code assistants)
- No personal data processing
- No autonomous decision-making
- Human always in the loop
- Governance requirement: Self-assessment by project team
Tier 2 — Medium Risk
- Customer-facing AI (e.g. chatbots, recommendation engines)
- Processes personal data
- Assists human decisions but does not make autonomous decisions
- Governance requirement: Review by AI team lead + DPIA (Data Protection Impact Assessment)
Tier 3 — High Risk
- Autonomous decisions affecting individuals (e.g. credit scoring, hiring screening)
- Processes sensitive personal data
- Regulated use cases (financial services, healthcare, etc.)
- Governance requirement: Full governance committee review + DPIA + ongoing monitoring
Risk Assessment Checklist
For each AI system, answer these questions to determine the risk tier:
- Does it process personal data? (Yes = Tier 2 minimum)
- Does it make autonomous decisions affecting people? (Yes = Tier 3)
- Is it in a regulated industry? (Yes = Tier 3)
- Could errors cause financial harm? (Yes = Tier 2 minimum)
- Could errors cause reputational harm? (Yes = Tier 2 minimum)
- Is it customer-facing? (Yes = Tier 2 minimum)
- Does it use biometric data? (Yes = Tier 3)
System name:
Assessed risk tier:
Assessed by:
Date:
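The checklist above is effectively a decision procedure, and teams automating their AI inventory may find it useful to encode it. The sketch below is illustrative only, not part of the template: the function name and answer keys are invented for this example, and the logic simply applies the tiering rules stated above (any Tier 3 trigger wins; otherwise any Tier 2 trigger raises the floor; otherwise Tier 1).

```python
def assess_risk_tier(answers):
    """Assign a governance risk tier (1-3) from checklist answers.

    `answers` maps a checklist question (keys here are illustrative,
    not a fixed schema) to True/False; missing keys count as "No".
    """
    # Any of these forces Tier 3 (full governance committee review).
    if (answers.get("autonomous_decisions_affecting_people")
            or answers.get("regulated_industry")
            or answers.get("biometric_data")):
        return 3
    # Any of these raises the floor to Tier 2 (AI team lead review + DPIA).
    if (answers.get("processes_personal_data")
            or answers.get("financial_harm_possible")
            or answers.get("reputational_harm_possible")
            or answers.get("customer_facing")):
        return 2
    # Otherwise Tier 1: self-assessment by the project team.
    return 1

# Example: a customer-facing chatbot that processes personal data
chatbot = {"processes_personal_data": True, "customer_facing": True}
print(assess_risk_tier(chatbot))  # → 2
```

Note that the rules are evaluated strictly in order of severity, so a system that is both customer-facing and in a regulated industry correctly lands in Tier 3, not Tier 2.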
Model Review & Approval Process
Pre-Deployment Review Checklist
Before any AI model goes to production, the following must be completed:
Data & Bias
- Training data sources documented and reviewed
- Bias testing completed across protected characteristics
- Data quality assessment passed minimum thresholds
- Data retention and deletion policies defined
Performance
- Model accuracy meets agreed threshold: % minimum
- Performance tested on representative test dataset
- Edge cases and failure modes documented
- Comparison against baseline (rule-based or human) completed
Security
- Input validation implemented (prompt injection, adversarial inputs)
- Access controls defined and implemented
- Data encryption at rest and in transit confirmed
- Security review completed by IT/security team
Compliance
- DPIA completed (if processing personal data)
- Privacy notice updated to reflect AI processing
- Regulatory requirements mapped and addressed
- Explainability approach defined for affected individuals
Operational
- Monitoring and alerting configured
- Rollback procedure documented and tested
- On-call / support responsibilities assigned
- User documentation and training materials prepared
Sign-off
| Role | Name | Approved | Date |
|---|---|---|---|
| Model Owner | | Yes / No | |
| Technical Reviewer | | Yes / No | |
| Compliance | | Yes / No | |
| Business Owner | | Yes / No | |
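Teams that run deployments through a pipeline sometimes enforce this sign-off gate in code. A minimal sketch, assuming the four role names from the table above (the function and the shape of the `signoffs` input are invented for this example):

```python
# Required sign-off roles, taken from the sign-off table in this template.
REQUIRED_SIGNOFFS = {"Model Owner", "Technical Reviewer", "Compliance", "Business Owner"}

def deployment_approved(signoffs):
    """Gate deployment: every required role must have explicitly approved.

    `signoffs` maps role name -> True (approved) / False (rejected);
    a missing role counts as not yet approved.
    """
    return all(signoffs.get(role) is True for role in REQUIRED_SIGNOFFS)

# Example: one approval still outstanding, so deployment is blocked
partial = {"Model Owner": True, "Technical Reviewer": True, "Compliance": True}
print(deployment_approved(partial))  # → False
```

Requiring an explicit `True` (rather than just the absence of a rejection) mirrors the intent of the checklist: silence is not approval.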
How to use this template
Adapt the principles to your organisation
Review the six AI principles and modify them to align with your organisation's values, industry, and regulatory context.
Establish the governance committee
Appoint members from the roles listed and set a regular meeting cadence. Start with monthly meetings while the framework is new.
Classify existing AI systems
Inventory all current AI tools and classify each into Tier 1, 2, or 3. Address any Tier 3 systems that lack governance immediately.
Implement the review process
Apply the pre-deployment checklist to new AI systems and retrofit it to existing high-risk systems within 90 days.
Review and evolve
Schedule a quarterly review of the framework. AI regulation and best practice evolve rapidly — your governance must keep pace.
Frequently asked questions
How does this framework relate to the EU AI Act?
The risk tiering system in this framework mirrors the EU AI Act's risk-based approach. Tier 3 corresponds to high-risk AI systems under the Act. You should review the specific requirements of the AI Act for your use cases and adjust the framework accordingly.
Do we need a formal AI governance committee?
Not necessarily a formal committee, but you do need clear accountability. In smaller organisations, a single AI lead plus a monthly review with the leadership team can serve the same function.
How should we govern employees' use of public AI tools?
Create an AI acceptable use policy that covers tools like ChatGPT, Copilot, and other commercial AI services. Combine it with regular training and clear guidelines on what data can be shared with AI tools.
How does AI governance relate to data governance?
AI governance builds on data governance. You cannot govern AI effectively without well-governed data. Ensure your data governance framework covers data quality, lineage, access, and privacy before layering AI governance on top.
How often should AI systems be audited?
Tier 3 (high-risk) systems should be audited at least annually, with continuous monitoring in between. Tier 2 systems should be reviewed every 6-12 months. Tier 1 systems can be reviewed annually through self-assessment.