The EU AI Act came into force in August 2024, with its provisions phased in between 2025 and 2027. It is the world's first comprehensive AI legislation, and its impact extends well beyond EU borders. For UK businesses that trade with the EU, serve EU customers, or develop AI products used in the EU market, the Act creates real compliance obligations that cannot be ignored.
This guide breaks down the key provisions, explains the risk-based classification system, and sets out practical steps UK businesses should take now.
Key Provisions of the EU AI Act
The Act takes a risk-based approach to AI regulation. Rather than regulating all AI systems equally, it categorises them by the risk they pose to health, safety, and fundamental rights. The higher the risk, the stricter the requirements.
Several key principles underpin the Act:
- Transparency: Users must be informed when they are interacting with an AI system. AI-generated content, including deepfakes, must be labelled.
- Human oversight: High-risk AI systems must be designed so that humans can effectively oversee their operation and intervene when necessary.
- Data governance: Training data for high-risk AI systems must meet quality standards, including measures to identify and mitigate bias.
- Technical documentation: Providers of high-risk AI systems must maintain detailed documentation covering the system's design, capabilities, limitations, and testing.
- Conformity assessment: High-risk AI systems must undergo assessment before being placed on the market, similar to CE marking for physical products.
Penalties for non-compliance are significant: up to €35 million or 7% of global annual turnover, whichever is higher, for the most serious violations.
The Risk Categories Explained
The Act defines four risk tiers, each with different requirements:
- Unacceptable risk (banned): AI systems that pose a clear threat to people's safety or rights. This includes social scoring by governments, real-time remote biometric identification in publicly accessible spaces by law enforcement (with narrow exceptions), and AI that manipulates human behaviour to cause harm. These systems are prohibited outright.
- High risk: AI systems used in critical areas such as recruitment, credit scoring, education, law enforcement, and critical infrastructure. These systems must meet stringent requirements including risk management, data governance, technical documentation, human oversight, accuracy, and robustness.
- Limited risk: AI systems with specific transparency obligations. Chatbots must disclose that users are interacting with AI. Emotion recognition and biometric categorisation systems must inform users. AI-generated content must be labelled.
- Minimal risk: The vast majority of AI systems, such as spam filters, AI-powered inventory management, or recommendation engines. These face no specific obligations under the Act, though voluntary codes of conduct are encouraged.
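To make the tiers concrete, here is a minimal sketch of the classification scheme as a lookup. The example use cases and their tier assignments are illustrative assumptions only, not legal advice; real classification requires checking the system against the Act's annexes and its specific context of use.

```python
from enum import Enum

class RiskTier(Enum):
    """The EU AI Act's four risk tiers, in descending order of obligation."""
    UNACCEPTABLE = "banned outright"
    HIGH = "strict requirements and conformity assessment"
    LIMITED = "transparency obligations"
    MINIMAL = "no specific obligations"

# Illustrative examples only; actual classification depends on the Act's
# annexes and how the system is deployed.
EXAMPLE_CLASSIFICATIONS = {
    "government social scoring": RiskTier.UNACCEPTABLE,
    "CV-screening recruitment tool": RiskTier.HIGH,
    "credit-scoring model": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}

def tier_for(use_case: str) -> RiskTier:
    """Look up the illustrative tier for a use case, defaulting to MINIMAL."""
    return EXAMPLE_CLASSIFICATIONS.get(use_case, RiskTier.MINIMAL)
```

The default of `MINIMAL` mirrors the Act's structure: most systems face no specific obligations, and it is the exceptions that carry requirements.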
General-purpose AI models, including large language models like GPT-4 and Claude, have their own set of requirements. Providers must maintain technical documentation, comply with EU copyright law, and publish summaries of training data. Models deemed to pose "systemic risk" face additional obligations including adversarial testing and incident reporting.
Impact on UK Businesses
Post-Brexit, the UK is not directly subject to the EU AI Act. However, the Act has extraterritorial reach, much like GDPR. UK businesses are affected if they:
- Place AI systems on the EU market or put them into service in the EU
- Develop AI systems whose outputs are used within the EU
- Are importers or distributors of AI systems in the EU
- Provide AI services to EU-based customers
In practice, this means most UK businesses with any EU commercial activity need to pay attention. If you develop an AI-powered recruitment tool used by an EU client, you're in scope. If your SaaS product uses AI to process data for EU customers, you're likely in scope. The "Brussels Effect" — where EU regulation becomes a de facto global standard because compliance is easier than maintaining separate systems — is already playing out with AI, just as it did with data protection.
The UK's Own Approach
The UK has deliberately taken a different path. Rather than a single, comprehensive AI law, the UK government has adopted a "pro-innovation" approach that works through existing sector regulators. The FCA, ICO, Ofcom, CMA, and other regulators are each developing AI guidance for their sectors.
This means UK businesses face a more fragmented but potentially more flexible regulatory environment. The upside is less prescriptive regulation. The downside is less certainty — you need to track guidance from multiple regulators, and the rules may differ depending on your sector.
For businesses operating in both markets, the pragmatic approach is to treat EU AI Act compliance as your baseline and then adjust for any additional UK-specific requirements. This mirrors what many organisations did with GDPR and the UK GDPR.
Practical Compliance Steps
Whether you're directly in scope or simply want to prepare for the direction of travel, here are the steps UK businesses should take now:
- Audit your AI systems. Create an inventory of all AI tools and systems in use across your organisation. For each, document its purpose, data inputs, decision-making role, and who it affects.
- Classify by risk. Map each system against the Act's risk categories. Most business AI systems will fall into minimal or limited risk, but anything involving decisions about people (hiring, lending, access to services) may be high risk.
- Address high-risk gaps. For any high-risk systems, assess your current practices against the Act's requirements. Key areas to check: human oversight mechanisms, documentation, bias testing, and data governance.
- Update your contracts. Review agreements with AI vendors and customers. Ensure responsibilities for compliance are clearly allocated, particularly for AI systems deployed across borders.
- Build governance. Establish an AI governance framework that covers approval processes, risk assessment, monitoring, and incident response. This is good practice regardless of which regulations apply to you.
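The first three steps above amount to building and annotating an inventory. A minimal sketch of what one inventory entry might capture, with the field names, the example system, and the gap checklist all being illustrative assumptions:

```python
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    """One entry in an AI system inventory (field names are illustrative)."""
    name: str
    purpose: str
    data_inputs: list[str]
    decision_role: str           # e.g. "advisory" or "automated decision"
    affected_parties: list[str]  # e.g. ["job applicants"]
    risk_tier: str = "unclassified"
    gaps: list[str] = field(default_factory=list)

def flag_high_risk_gaps(record: AISystemRecord) -> AISystemRecord:
    """Flag the key review areas for systems tentatively marked high risk."""
    if record.risk_tier == "high":
        for area in ("human oversight", "technical documentation",
                     "bias testing", "data governance"):
            record.gaps.append(f"review: {area}")
    return record

# Hypothetical example entry: an AI-assisted recruitment tool.
inventory = [
    AISystemRecord(
        name="HireAssist",
        purpose="CV screening",
        data_inputs=["CVs", "application forms"],
        decision_role="advisory",
        affected_parties=["job applicants"],
        risk_tier="high",
    ),
]
for rec in inventory:
    flag_high_risk_gaps(rec)
```

Even a spreadsheet serves the same purpose; the point is that each system gets a purpose, a data trail, a tentative tier, and a list of gaps to close.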
The EU AI Act is a significant piece of legislation, but it shouldn't be paralysing. Most of its requirements align with good AI practice that responsible businesses should be following anyway. The organisations that will find compliance easiest are those that have already invested in governance, documentation, and responsible AI practices.
Need help understanding how the EU AI Act affects your business? Grove AI offers AI compliance assessments that map your current AI usage against regulatory requirements and give you a clear action plan. Book a consultation to get started.