
How to Write an AI Acceptable Use Policy

Every organisation using AI needs clear rules of the road. Here's how to write an AI Acceptable Use Policy that actually gets followed.

1 March 2026 · 8 min read

AI tools are already inside your organisation. Even if you haven't officially adopted them, your teams are using ChatGPT, Copilot, and a dozen other tools to draft emails, summarise documents, and generate code. The question is not whether your people are using AI — it's whether they're using it safely.

An AI Acceptable Use Policy (AUP) sets the boundaries. It tells your people which tools are approved, what data they can and cannot share with AI systems, and what happens when something goes wrong. Without one, you're relying on individual judgement — and that's a recipe for data breaches, compliance failures, and reputational damage.

Why Every Organisation Needs an AI Acceptable Use Policy

The regulatory landscape is shifting fast. The EU AI Act is already in force, the UK government's pro-innovation approach still expects responsible use, and sector-specific regulators — from the FCA to the ICO — are increasingly scrutinising how organisations use AI.

But compliance is only one reason. A well-written AUP also:

  • Reduces shadow AI risk — When people know which tools are approved, they stop smuggling data into unapproved ones
  • Protects sensitive data — Clear rules about what can and cannot be entered into AI systems prevent accidental data leaks
  • Builds trust — Clients and partners want to know you have guardrails in place
  • Creates consistency — Everyone operates under the same rules, reducing confusion and liability

If your organisation has more than a handful of employees and no AI acceptable use policy, you're exposed. The good news is that writing one is not as difficult as it sounds.

Key Sections Your Policy Should Include

Every AI AUP needs to cover the same core areas. Here's the structure we recommend to our clients:

1. Scope and purpose. State who the policy applies to (all staff, contractors, third parties) and why it exists. Keep it in plain English — if people don't understand it, they won't follow it.

2. Approved tools and platforms. List the AI tools your organisation has vetted and approved. This should include both enterprise tools (e.g., Microsoft Copilot with your data residency settings) and any consumer-grade tools that are permitted for specific use cases. Be explicit: if a tool is not on the list, it's not approved.
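If you want the approved list to be machine-checkable as well as human-readable, it can be encoded as a simple allowlist. Here's a minimal sketch in Python; the tool names and use-case tags are hypothetical examples, not recommendations:

    # Hypothetical allowlist: tool names and the use cases each is approved for.
    # The entries here are illustrative placeholders, not vetted tools.
    APPROVED_TOOLS = {
        "microsoft-copilot-enterprise": {"drafting", "summarisation", "code"},
        "internal-rag-assistant": {"document-search"},
        "chatgpt-team": {"marketing-copy"},
    }

    def is_approved(tool: str, use_case: str) -> bool:
        """Return True only if the tool is on the list for this use case."""
        return use_case in APPROVED_TOOLS.get(tool, set())

    # The policy's rule made executable: not on the list means not approved.
    assert not is_approved("random-browser-extension", "drafting")

A structure like this also gives you something to wire into onboarding docs or internal tooling, so the policy and the enforcement never drift apart.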

3. Data handling and classification. This is the most critical section. Define what data can be used with AI tools by classification level (the sketch after this list shows one way to encode the tiers). For example:

  • Public data: Can be used freely with approved tools
  • Internal data: Can be used with enterprise-licensed tools only
  • Confidential data: Can only be used with on-premise or private AI deployments
  • Restricted data: Must never be entered into any AI system (e.g., passwords, PII without consent)
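One way to make these rules enforceable, for instance in a DLP hook or an internal AI gateway, is to map each classification level to the minimum deployment tier allowed to receive it. A minimal sketch, assuming illustrative level and tier names:

    from enum import IntEnum

    class ToolTier(IntEnum):
        """Deployment tiers, ordered least to most trusted (illustrative)."""
        CONSUMER = 1    # public, consumer-grade tools
        ENTERPRISE = 2  # enterprise-licensed tools with contractual controls
        PRIVATE = 3     # on-premise or private deployments

    # Minimum tier required for each classification level.
    # Restricted data maps to None: no AI system may receive it.
    MIN_TIER = {
        "public": ToolTier.CONSUMER,
        "internal": ToolTier.ENTERPRISE,
        "confidential": ToolTier.PRIVATE,
        "restricted": None,
    }

    def may_share(classification: str, tier: ToolTier) -> bool:
        """True if data at this classification may go to a tool at this tier."""
        required = MIN_TIER[classification]
        return required is not None and tier >= required

    assert may_share("internal", ToolTier.ENTERPRISE)
    assert not may_share("restricted", ToolTier.PRIVATE)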

4. Prohibited uses. Be explicit about what is not allowed. Common prohibitions include: using AI to make automated decisions about people without human review, entering client data into consumer AI tools, using AI-generated content without review, and relying on AI for legal, medical, or financial advice without expert oversight.

5. Human oversight and review. Define where human review is mandatory. Any AI output that will be sent to a client, published externally, or used in a decision-making process should require human sign-off.

6. Incident reporting. What happens if someone accidentally shares sensitive data with an AI tool? Your policy needs a clear process: who to contact, what to document, and how incidents are investigated.
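The "what to document" step is easier to follow when the policy ships with a fixed set of fields. Here's a minimal sketch of what an incident record could capture; the field names are assumptions, not a standard:

    from dataclasses import dataclass, field
    from datetime import datetime, timezone

    @dataclass
    class AIIncidentReport:
        """Fields an incident report could capture (illustrative, not exhaustive)."""
        reporter: str              # who is reporting the incident
        tool: str                  # which AI tool was involved
        data_classification: str   # highest classification of the data shared
        description: str           # what happened, in the reporter's own words
        reported_at: datetime = field(
            default_factory=lambda: datetime.now(timezone.utc)
        )
        contained: bool = False    # has the exposure been contained yet?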

7. Review and update cadence. AI moves fast. Your policy should be reviewed at least quarterly and updated whenever new tools are adopted or regulations change.

A Template Structure You Can Adapt

We've published a free AI Acceptable Use Policy template that you can download and adapt for your organisation. It includes all seven sections above, with placeholder text and examples you can customise.

The template is designed to be practical, not legalistic. It uses plain language, includes real-world examples, and can be adapted for organisations of any size. We recommend starting with the template and then working with your legal and compliance teams to tailor it to your specific regulatory environment.

A few tips for adapting it:

  • Start narrow, then expand. It's better to approve three tools with clear rules than to try to cover every possible AI use case on day one
  • Include real examples. Instead of "do not share confidential data," say "do not paste client contracts, financial statements, or employee records into ChatGPT or similar tools"
  • Make it findable. The policy should live where your team can actually find it — not buried in a SharePoint folder nobody checks

Common Mistakes to Avoid

We've reviewed dozens of AI policies across different organisations. The same mistakes come up repeatedly:

Being too vague. "Use AI responsibly" is not a policy. If your AUP doesn't give people specific, actionable guidance, they'll interpret it however suits them.

Banning everything. An outright ban on AI tools doesn't stop people using them — it just drives usage underground. Shadow AI is far more dangerous than governed AI. If you ban all tools, you lose visibility entirely.

Writing it and forgetting it. A policy that was written in 2024 and never updated is worse than no policy at all, because it gives a false sense of security. AI capabilities change every few months. Your policy needs to keep pace.

No enforcement mechanism. A policy without consequences is a suggestion. Make sure your AUP is integrated into your broader compliance framework, with clear accountability for violations.

Ignoring training. Publishing a policy is not enough. Your teams need training on what the policy means in practice, with role-specific examples. A marketing team needs different guidance from an engineering team.

Fitting the AUP into Your Broader AI Governance

An AI Acceptable Use Policy is one piece of a larger AI governance framework. On its own, it sets the rules. But without supporting structures — training, monitoring, incident response, and regular review — it won't be effective.

We recommend treating the AUP as the starting point. Once it's in place, you can build out a full governance framework that includes risk assessments for specific AI use cases, vendor evaluation criteria, model monitoring, and audit trails.

The organisations that get this right are the ones that treat AI governance as an ongoing programme, not a one-off document. They review their policies regularly, update them as new tools and regulations emerge, and invest in training their teams to use AI effectively and responsibly.


At Grove AI, we help organisations build practical AI governance frameworks, including acceptable use policies, risk assessments, and training programmes. If you need help getting your AI governance in order, book a free consultation and we'll help you get started.
