AI Acceptable Use Policy Template
A ready-to-customise policy that defines how employees may use AI tools at work. Covers approved tools, data classification rules, prohibited uses, and accountability expectations. Essential for any organisation where employees are using ChatGPT, Copilot, or other AI tools.
Policy Overview
AI Acceptable Use Policy
- Organisation:
- Effective date:
- Policy owner:
- Approved by:
- Review date:
Purpose
This policy defines the acceptable use of artificial intelligence (AI) tools by employees of [Organisation name]. It ensures that AI is used productively, safely, and in compliance with our legal and ethical obligations.
Scope
This policy applies to:
- All employees, contractors, and temporary workers
- All AI tools including but not limited to: large language models (e.g. ChatGPT, Claude, Gemini), code assistants (e.g. GitHub Copilot), image generators, and AI features embedded in existing software
- Use of AI tools for work purposes, whether on company or personal devices
Definitions
- AI tool: Any software that uses artificial intelligence or machine learning to generate text, code, images, or other outputs
- Approved AI tool: An AI tool that has been reviewed and approved for use by [IT/Security team]
- Confidential data: Data classified as confidential or above under our data classification policy
- Personal data: Any information relating to an identified or identifiable individual (as defined by GDPR)
Key Principles
- AI assists, humans decide. AI outputs must be reviewed and validated by a human before use.
- Protect our data. Never share confidential or personal data with unapproved AI tools.
- Be transparent. Disclose AI use when it materially contributes to work shared with others.
- Take responsibility. You are accountable for any AI output you use or share.
Approved Tools & Data Rules
Approved AI Tools
| Tool | Approved Use | Data Classification Allowed | Notes |
|---|---|---|---|
| | | Public / Internal only | |
| | | Public / Internal only | |
| | | Public only | |
Prohibited AI Tools
| Tool | Reason | Alternative |
|---|---|---|
| | Does not meet security requirements | Use instead |
| | No data processing agreement (DPA) in place | Use instead |
Data Classification Rules
Public data — May be used with approved AI tools
- Published marketing materials
- Publicly available information
- General knowledge questions
Internal data — May be used ONLY with enterprise-licensed AI tools
- Internal processes and procedures
- Non-sensitive business documents
- Anonymised operational data
Confidential data — MUST NOT be entered into any external AI tool
- Financial results and forecasts
- Strategic plans and M&A activity
- Employee performance data
- Customer lists and pricing
- Intellectual property and trade secrets
Personal data — MUST NOT be entered into any AI tool without explicit approval
- Customer names, emails, phone numbers
- Employee personal information
- Health or financial data
- Any data subject to GDPR
Quick Decision Guide
Before using an AI tool, ask yourself:
- Is this an approved tool? If not, do not use it.
- What data am I sharing? Check the classification above.
- Could the data identify a person? If yes, do not share it.
- Would I be comfortable if this data appeared publicly? If not, do not share it.
Do's and Don'ts
DO
- Use approved AI tools to improve productivity (drafting, summarising, brainstorming)
- Review and edit all AI-generated content before using it
- Cite or disclose AI assistance when it materially contributes to deliverables
- Report any AI security concerns to [IT/Security team email]
- Check AI outputs for accuracy, bias, and appropriateness
- Use AI to learn and upskill — explore new tools within the approved list
- Follow your team's specific AI usage guidelines where they exist
DON'T
- Enter confidential data, personal data, or trade secrets into AI tools
- Use AI-generated content without reviewing it for accuracy and quality
- Copy and paste customer data, financial data, or HR data into AI tools
- Use AI to make decisions about individuals without human oversight
- Present AI-generated work as entirely your own in formal deliverables
- Use unapproved AI tools for work purposes
- Rely on AI for legal, financial, or medical advice without expert review
- Use AI to generate content that is misleading, discriminatory, or harmful
- Share your AI tool login credentials with others
- Disable or bypass any AI safety features or content filters
Specific Use Cases
| Use Case | Permitted? | Conditions |
|---|---|---|
| Drafting emails and documents | Yes | Review before sending; no confidential data |
| Summarising meeting notes | Yes | No confidential or personal data |
| Code generation (Copilot) | Yes | Review for security vulnerabilities |
| Customer communication drafts | Yes | Review tone and accuracy; no PII in prompts |
| Data analysis | Conditional | Only anonymised, non-confidential data |
| HR decisions (screening, reviews) | No | Not permitted without governance approval |
| Financial reporting | Conditional | Draft assistance only; not for final figures |
| Legal document drafting | Conditional | Drafts only; must involve legal team review |
Compliance & Training
Compliance
- Policy violations will be addressed through the standard disciplinary process
- Accidental data sharing should be reported immediately to [Security team] at [email]
- Managers are responsible for ensuring their teams understand and follow this policy
- IT/Security will monitor approved AI tool usage for compliance
Reporting
If you notice any of the following, report it immediately:
- Confidential or personal data shared with an AI tool
- AI-generated content that is inaccurate, biased, or harmful
- Use of unapproved AI tools for work purposes
- AI security vulnerabilities or unusual behaviour
- Report to:
- Email:
- Slack channel:
Training Requirements
| Training | Audience | Frequency | Duration |
|---|---|---|---|
| AI Acceptable Use Policy overview | All employees | On hire + annually | 30 minutes |
| AI data classification | All employees | Annually | 15 minutes |
| AI tool training (approved tools) | Tool users | On first use | 1 hour |
| AI governance for managers | People managers | Annually | 45 minutes |
Policy Acknowledgement
I have read and understood the AI Acceptable Use Policy. I agree to comply with its requirements.
- Name:
- Role:
- Signature:
- Date:
Policy Review
This policy will be reviewed every [X] months, or sooner when:
- New AI tools are introduced
- Regulations change (e.g. EU AI Act updates)
- A significant AI incident occurs
- Material changes are made to our AI strategy
How to use this template
Customise the policy for your organisation
Fill in your organisation name, approved tool list, data classification rules, and reporting contacts. Adjust the tone to match your company culture.
Review with legal and compliance
Have your legal team review the policy for alignment with GDPR, employment law, and any industry-specific regulations.
Get leadership endorsement
Publish the policy with a message from senior leadership explaining why AI use guidelines matter.
Train employees
Run interactive training sessions — not just a document to sign. Use real examples of do's and don'ts.
Collect acknowledgements
Require every employee to sign the acknowledgement form. Track completion through HR or your learning management system.
Review and update regularly
AI tools and regulations change rapidly. Review the policy at least every 6 months and update the approved tools list as needed.
Frequently asked questions
Can employees use ChatGPT at work?
This depends on your organisation's policy. Many organisations allow use of ChatGPT for general productivity tasks (drafting, brainstorming) with public or internal data, but prohibit sharing confidential or personal data. The key is having clear guidelines.
What should we do if someone accidentally shares confidential data with an AI tool?
Treat it as a data incident: report immediately, assess the data shared, review the AI provider's data retention policy, and take steps to minimise impact. Use it as a learning opportunity rather than purely a disciplinary matter.
Do we need separate AI policies for each department?
One organisation-wide policy provides the baseline. Departments with specific needs (e.g. engineering using Copilot, marketing using image generators) can have supplementary guidelines within the framework of the main policy.
How do we enforce the policy?
Use a combination of technical controls (approved tool provisioning, network restrictions), training and awareness, management accountability, and spot-check audits. Focus on building a culture of responsible AI use rather than surveillance.
Can employees use personal AI accounts for work?
This is a risk-based decision. At minimum, personal AI tool use should comply with the same data rules: no confidential or personal data. Many organisations allow personal accounts for non-work use but restrict work-related use to approved enterprise tools.