
AI Acceptable Use Policy Template

A ready-to-customise policy that defines how employees may use AI tools at work. Covers approved tools, data classification rules, prohibited uses, and accountability expectations. Essential for any organisation where employees are using ChatGPT, Copilot, or other AI tools.

Overview

What's included

Policy purpose, scope, and definitions
Approved and prohibited AI tool lists
Data classification rules for AI use
Specific do's and don'ts for employees
Accountability and compliance requirements
Reporting and escalation procedures
Training requirements and acknowledgement form
1. Policy Overview

AI Acceptable Use Policy

Organisation:
Effective date:
Policy owner:
Approved by:
Review date:

Purpose

This policy defines the acceptable use of artificial intelligence (AI) tools by employees of [Organisation name]. It ensures that AI is used productively, safely, and in compliance with our legal and ethical obligations.

Scope

This policy applies to:

  • All employees, contractors, and temporary workers
  • All AI tools including but not limited to: large language models (e.g. ChatGPT, Claude, Gemini), code assistants (e.g. GitHub Copilot), image generators, and AI features embedded in existing software
  • Use of AI tools for work purposes, whether on company or personal devices

Definitions

  • AI tool: Any software that uses artificial intelligence or machine learning to generate text, code, images, or other outputs
  • Approved AI tool: An AI tool that has been reviewed and approved for use by [IT/Security team]
  • Confidential data: Data classified as confidential or above under our data classification policy
  • Personal data: Any information relating to an identified or identifiable individual (as defined by GDPR)

Key Principles

  1. AI assists, humans decide. AI outputs must be reviewed and validated by a human before use.
  2. Protect our data. Never share confidential or personal data with unapproved AI tools.
  3. Be transparent. Disclose AI use when it materially contributes to work shared with others.
  4. Take responsibility. You are accountable for any AI output you use or share.
2. Approved Tools & Data Rules

Approved AI Tools

Tool | Approved Use | Data Classification Allowed | Notes
---- | ------------ | --------------------------- | -----
     |              | Public / Internal only      |
     |              | Public / Internal only      |
     |              | Public only                 |

Prohibited AI Tools

ToolReasonAlternative
 Does not meet security requirementsUse   instead
 No DPA in placeUse   instead

Data Classification Rules

Public data — May be used with approved AI tools

  • Published marketing materials
  • Publicly available information
  • General knowledge questions

Internal data — May be used ONLY with enterprise-licensed AI tools

  • Internal processes and procedures
  • Non-sensitive business documents
  • Anonymised operational data

Confidential data — MUST NOT be entered into any external AI tool

  • Financial results and forecasts
  • Strategic plans and M&A activity
  • Employee performance data
  • Customer lists and pricing
  • Intellectual property and trade secrets

Personal data — MUST NOT be entered into any AI tool without explicit approval

  • Customer names, emails, phone numbers
  • Employee personal information
  • Health or financial data
  • Any data subject to GDPR

Quick Decision Guide

Before using an AI tool, ask yourself:

  1. Is this an approved tool? If not, do not use it.
  2. What data am I sharing? Check the classification above.
  3. Could the data identify a person? If yes, do not share it.
  4. Would I be comfortable if this data appeared publicly? If not, do not share it.
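For teams that want to automate this checklist (for example in a tool-request form or a chatbot guardrail), the four questions above can be sketched as a simple pre-flight check. This is a minimal illustration, not part of the policy: the tool names and classification labels are placeholders to replace with your own approved list, and question 4 remains a human judgement.

```python
# Minimal sketch of the Quick Decision Guide as a pre-flight check.
# APPROVED_TOOLS and ALLOWED_CLASSIFICATIONS are placeholder examples;
# replace them with your organisation's own lists.

APPROVED_TOOLS = {"chatgpt-enterprise", "github-copilot"}

# Per the rules above, only public and internal data may go to approved tools.
ALLOWED_CLASSIFICATIONS = {"public", "internal"}


def may_use_ai(tool: str, classification: str, identifies_person: bool) -> bool:
    """Return True only if the checklist questions all pass."""
    if tool.lower() not in APPROVED_TOOLS:                      # 1. Approved tool?
        return False
    if classification.lower() not in ALLOWED_CLASSIFICATIONS:   # 2. Data classification?
        return False
    if identifies_person:                                       # 3. Could it identify a person?
        return False
    return True  # 4. "Comfortable if public?" stays a human judgement call


print(may_use_ai("chatgpt-enterprise", "internal", identifies_person=False))      # True
print(may_use_ai("chatgpt-enterprise", "confidential", identifies_person=False))  # False
```

Encoding the checklist this way also makes the rules testable whenever the approved-tool list changes.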
3. Do's and Don'ts

DO

  • Use approved AI tools to improve productivity (drafting, summarising, brainstorming)
  • Review and edit all AI-generated content before using it
  • Cite or disclose AI assistance when it materially contributes to deliverables
  • Report any AI security concerns to [IT/Security team email]
  • Check AI outputs for accuracy, bias, and appropriateness
  • Use AI to learn and upskill — explore new tools within the approved list
  • Follow your team's specific AI usage guidelines where they exist

DON'T

  • Enter confidential data, personal data, or trade secrets into AI tools
  • Use AI-generated content without reviewing it for accuracy and quality
  • Copy and paste customer data, financial data, or HR data into AI tools
  • Use AI to make decisions about individuals without human oversight
  • Present AI-generated work as entirely your own in formal deliverables
  • Use unapproved AI tools for work purposes
  • Rely on AI for legal, financial, or medical advice without expert review
  • Use AI to generate content that is misleading, discriminatory, or harmful
  • Share your AI tool login credentials with others
  • Disable or bypass any AI safety features or content filters

Specific Use Cases

Use Case | Permitted? | Conditions
-------- | ---------- | ----------
Drafting emails and documents | Yes | Review before sending; no confidential data
Summarising meeting notes | Yes | No confidential or personal data
Code generation (Copilot) | Yes | Review for security vulnerabilities
Customer communication drafts | Yes | Review tone and accuracy; no PII in prompts
Data analysis | Conditional | Only anonymised, non-confidential data
HR decisions (screening, reviews) | No | Not permitted without governance approval
Financial reporting | Conditional | Not for final figures; draft assistance only
Legal document drafting | Conditional | Must involve legal team review
4. Compliance & Training

Compliance

  • Policy violations will be addressed through the standard disciplinary process
  • Accidental data sharing should be reported immediately to [Security team] at [email]
  • Managers are responsible for ensuring their teams understand and follow this policy
  • IT/Security will monitor approved AI tool usage for compliance

Reporting

If you notice any of the following, report it immediately:

  • Confidential or personal data shared with an AI tool
  • AI-generated content that is inaccurate, biased, or harmful
  • Use of unapproved AI tools for work purposes
  • AI security vulnerabilities or unusual behaviour

Report to:
Email:
Slack channel:

Training Requirements

Training | Audience | Frequency | Duration
-------- | -------- | --------- | --------
AI Acceptable Use Policy overview | All employees | On hire + annually | 30 minutes
AI data classification | All employees | Annually | 15 minutes
AI tool training (approved tools) | Tool users | On first use | 1 hour
AI governance for managers | People managers | Annually | 45 minutes

Policy Acknowledgement

I have read and understood the AI Acceptable Use Policy. I agree to comply with its requirements.

Name:
Role:
Signature:
Date:

Policy Review

This policy will be reviewed every   months or when:

  • New AI tools are introduced
  • Regulations change (e.g. EU AI Act updates)
  • A significant AI incident occurs
  • Material changes are made to our AI strategy

Instructions

How to use this template

1. Customise the policy for your organisation

Fill in your organisation name, approved tool list, data classification rules, and reporting contacts. Adjust the tone to match your company culture.

2. Review with legal and compliance

Have your legal team review the policy for alignment with GDPR, employment law, and any industry-specific regulations.

3. Get leadership endorsement

Publish the policy with a message from senior leadership explaining why AI use guidelines matter.

4. Train employees

Run interactive training sessions — not just a document to sign. Use real examples of do's and don'ts.

5. Collect acknowledgements

Require every employee to sign the acknowledgement form. Track completion through HR or your learning management system.
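Completion tracking can be as simple as comparing your HR roster against the list of signed acknowledgements. A minimal sketch, assuming two lists exported from your HR or learning management system (the field names and example entries are illustrative):

```python
# Minimal sketch: find employees who have not yet signed the acknowledgement.
# The roster structure and field names are illustrative placeholders.

roster = [
    {"email": "amina@example.com", "name": "Amina"},
    {"email": "ben@example.com", "name": "Ben"},
    {"email": "chao@example.com", "name": "Chao"},
]

acknowledged = {"amina@example.com", "chao@example.com"}  # signed so far


def outstanding(roster, acknowledged):
    """Return roster entries with no signed acknowledgement on file."""
    return [p for p in roster if p["email"] not in acknowledged]


for person in outstanding(roster, acknowledged):
    print(f"Chase acknowledgement: {person['name']} <{person['email']}>")
```

In practice the roster would come from an HR export rather than a hard-coded list, but the comparison logic is the same.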

6. Review and update regularly

AI tools and regulations change rapidly. Review the policy at least every 6 months and update the approved tools list as needed.

Watch Out

Common mistakes to avoid

Banning AI entirely — employees will use it anyway; providing guidance is more effective than prohibition.
Making the policy too long and legalistic — employees need practical, actionable guidelines they can remember.
Not updating the approved tools list — new AI tools launch constantly; keep the list current.
Forgetting training — a policy without training is just a document nobody reads.
Not distinguishing between data types — blanket rules are less effective than data-classification-based guidance.

FAQ

Frequently asked questions

Can employees use ChatGPT at work?

This depends on your organisation's policy. Many organisations allow use of ChatGPT for general productivity tasks (drafting, brainstorming) with public or internal data, but prohibit sharing confidential or personal data. The key is having clear guidelines.

What should we do if someone accidentally shares confidential data with an AI tool?

Treat it as a data incident: report immediately, assess the data shared, review the AI provider's data retention policy, and take steps to minimise impact. Use it as a learning opportunity rather than purely a disciplinary matter.

Do we need separate AI policies for different departments?

One organisation-wide policy provides the baseline. Departments with specific needs (e.g. engineering using Copilot, marketing using image generators) can have supplementary guidelines within the framework of the main policy.

How do we enforce the policy?

Use a combination of: technical controls (approved tool provisioning, network restrictions), training and awareness, management accountability, and spot-check audits. Focus on building a culture of responsible AI use rather than surveillance.
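On the technical-controls side, one lightweight spot-check is scanning outbound proxy or DNS logs for traffic to known AI domains that are not on your approved list. A hedged sketch, assuming a simple whitespace-separated log format; the domain lists and log layout are placeholders to adapt to whatever your proxy actually emits:

```python
# Sketch: flag proxy log lines that hit known AI domains outside the approved list.
# Domain sets and the assumed log format are placeholders, not a real product config.

APPROVED_AI_DOMAINS = {"api.openai.com"}  # e.g. your enterprise-licensed tools
KNOWN_AI_DOMAINS = {"api.openai.com", "claude.ai", "gemini.google.com"}


def flag_unapproved(log_lines):
    """Return (user, domain) pairs for hits on known-but-unapproved AI domains."""
    flagged = []
    for line in log_lines:
        user, domain = line.split()[:2]  # assumed format: "<user> <domain> <rest...>"
        if domain in KNOWN_AI_DOMAINS and domain not in APPROVED_AI_DOMAINS:
            flagged.append((user, domain))
    return flagged


sample = [
    "ben claude.ai GET /chat",
    "amina api.openai.com POST /v1/chat",
]
print(flag_unapproved(sample))  # [('ben', 'claude.ai')]
```

Used sparingly and transparently, a check like this supports the spot-check audits mentioned above without tipping into surveillance.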

Are personal AI accounts covered by this policy?

This is a risk-based decision. At minimum, personal AI tool use should comply with the same data rules: no confidential or personal data. Many organisations allow personal accounts for non-work use but restrict work-related use to approved enterprise tools.
