AI Security Checklist Template
A comprehensive security checklist for AI systems covering the OWASP Top 10 for LLMs, prompt injection prevention, data leakage protection, access control, model security, and compliance requirements. Designed for security teams and developers building production AI applications.
Overview
What's included
Prompt Injection Prevention
Prompt Injection Prevention
System name: Assessor: Date:
Input Controls
- User input is separated from system instructions (parameterised prompts)
- Input length is capped at characters/tokens
- Input is sanitised for known injection patterns
- Special characters and encoding attacks are handled
- A content filter scans input before it reaches the LLM
System Prompt Protection
- System prompt does not contain secrets, API keys, or sensitive data
- System prompt includes explicit instructions to ignore override attempts
- System prompt is not returned or exposed in error messages
- System prompt is stored securely and version-controlled
Output Controls
- LLM output is validated before being executed or displayed
- Output is not directly used in code execution (eval, exec, SQL)
- Structured output is validated against a schema before use
- Output is scanned for PII leakage before delivery to users
Indirect Injection (RAG/Context)
- Retrieved content is treated as untrusted input
- Retrieved documents are scanned for injected instructions
- Content from external sources is sandboxed in the prompt
- The LLM is instructed to ignore instructions found in context documents
Testing
- Prompt injection test suite runs on every deployment
- Red-team testing conducted at least quarterly
- New injection techniques are added to the test suite as discovered
Data Security & Privacy
Data Security & Privacy
Data in Transit
- All AI API calls use HTTPS/TLS 1.2 or higher
- API keys are transmitted via headers, not URL parameters
- WebSocket connections (if used) are encrypted (WSS)
- Internal service-to-service communication is encrypted
Data at Rest
- Training data is encrypted at rest
- User conversation logs are encrypted at rest
- Vector store data is encrypted at rest
- Backup data is encrypted
Data Minimisation
- Only necessary data is sent to the AI provider
- PII is stripped or pseudonymised before AI processing where possible
- Conversation history is limited to messages / tokens
- Training data retention policy: days
Third-Party AI Provider
- Data processing agreement (DPA) in place
- Provider's data retention policy reviewed: days / no retention
- Provider does NOT use our data for model training (confirmed)
- Data residency requirements met: data stays in
- Provider security certifications verified: SOC2 / ISO27001 /
PII Handling
| Data Type | Present in Input? | Present in Output? | Mitigation |
|---|---|---|---|
| Names | Yes/No | Yes/No | |
| Email addresses | Yes/No | Yes/No | |
| Phone numbers | Yes/No | Yes/No | |
| Addresses | Yes/No | Yes/No | |
| Financial data | Yes/No | Yes/No | |
| Health data | Yes/No | Yes/No |
Access Control & Audit
Access Control & Audit
Authentication
- AI API endpoints require authentication
- API keys are unique per environment (dev, staging, production)
- API keys are stored in a secret manager (not in code or config files)
- API key rotation is scheduled every days
- Expired/compromised keys can be revoked immediately
Authorisation
- Users can only access AI features appropriate to their role
- Admin functions (prompt editing, model configuration) require elevated access
- Rate limiting is applied per user/API key: requests per
- IP allowlisting is configured for production endpoints (if applicable)
Audit Trail
- All AI API requests are logged with timestamp, user ID, and action
- All prompt changes are logged with author and version
- All model/configuration changes are logged
- All admin actions are logged
- Audit logs are immutable and retained for months
- PII in audit logs is redacted or encrypted
Incident Response
- AI-specific incident response plan exists
- Incident severity levels defined for AI failures
- On-call rotation covers AI systems
- Communication templates prepared for AI incidents
- Post-incident review process includes AI-specific root cause analysis
Security Review Schedule
| Review | Frequency | Last Completed | Next Due | Owner |
|---|---|---|---|---|
| Prompt injection testing | Monthly | |||
| Access control audit | Quarterly | |||
| Vendor security review | Annually | |||
| Penetration test (AI features) | Annually | |||
| Red-team exercise | Bi-annually |
Instructions
How to use this template
Complete the checklist for each AI system
Work through every item with your development and security teams. Mark items as done, in progress, or not applicable with justification.
Prioritise critical gaps
Address prompt injection prevention and data security first. These are the most common and highest-impact AI security risks.
Integrate into your CI/CD pipeline
Automate as many security checks as possible: input validation, output scanning, and prompt injection tests should run on every deployment.
Schedule regular reviews
AI security threats evolve quickly. Review and update the checklist quarterly and after any security incident.
Watch Out
Common mistakes to avoid
FAQ
Frequently asked questions
The OWASP Top 10 for LLMs identifies prompt injection as the #1 risk, followed by insecure output handling, training data poisoning, model denial of service, supply chain vulnerabilities, sensitive information disclosure, insecure plugin design, excessive agency, overreliance, and model theft.
Use a defence-in-depth approach: separate user input from system instructions, validate and sanitise inputs, scan for injection patterns, validate outputs before use, and do not use LLM output directly in code execution. No single defence is foolproof — layer multiple controls.
Generally no. System prompts often contain business logic, constraints, and instructions that could be exploited if exposed. Treat system prompts as confidential configuration.
Follow your standard incident response process with AI-specific additions: preserve AI logs (inputs, outputs, prompt versions), assess whether the model was manipulated or data was leaked, and review recent prompt or configuration changes as potential root causes.
Yes. Include AI-specific attack scenarios in your regular penetration testing: prompt injection, data extraction attempts, privilege escalation via AI, and abuse of AI-powered features. Consider engaging testers with AI security expertise.
Need a custom AI template?
Our team can build tailored templates for your specific business needs. Book a free strategy call.