
Code Review Prompt Examples

AI prompts for systematic code review — security analysis, performance optimisation, architecture evaluation, and maintainability assessment.

Comprehensive Code Review Prompt

Difficulty: intermediate

A thorough code review prompt that covers correctness, security, performance, maintainability, and best practices in a structured format.

Review the following code. Provide feedback in these categories:

**1. Correctness**
- Logic errors or edge cases not handled
- Off-by-one errors, null reference risks
- Race conditions or concurrency issues

**2. Security**
- Input validation gaps
- Injection vulnerabilities (SQL, XSS, command)
- Authentication/authorisation issues
- Sensitive data exposure
- OWASP Top 10 relevance

**3. Performance**
- Unnecessary computations or database queries
- N+1 query problems
- Missing caching opportunities
- Memory leaks or excessive allocation

**4. Maintainability**
- Code clarity and naming
- Function/method length and complexity
- DRY violations
- Missing error handling

**5. Best Practices**
- Language/framework idioms not followed
- Missing or inadequate tests
- API design issues

For each issue:
- Severity: Critical / Warning / Suggestion
- Line reference
- What the issue is
- How to fix it (with code example if helpful)

Code:
```[LANGUAGE]
[PASTE CODE]
```

Key takeaway: Structured code review prompts that separate concerns (security, performance, readability) produce more actionable feedback than 'review this code'.
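A prompt template like this can be filled programmatically before it is sent to a model. Here is a minimal sketch, assuming a plain-text template whose bracketed placeholders have been converted to `$`-style fields (the template text and helper name are illustrative, not part of any particular tool):

```python
from string import Template

FENCE = "`" * 3  # markdown code fence, built dynamically to keep this example readable

# Hypothetical $-style version of the prompt's [LANGUAGE] / [PASTE CODE] slots
REVIEW_TEMPLATE = Template(
    "Review the following code. Provide feedback in these categories: "
    "correctness, security, performance, maintainability, best practices.\n\n"
    "Code:\n" + FENCE + "$language\n$code\n" + FENCE
)

def build_review_prompt(language: str, code: str) -> str:
    """Fill the placeholders and return the complete prompt text."""
    return REVIEW_TEMPLATE.substitute(language=language, code=code)
```

The filled string can then be sent to whichever model API the team uses; keeping the template in one place ensures every review request carries the same category structure.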

Security-Focused Code Review Prompt

Difficulty: advanced

A deep security review prompt that checks for OWASP vulnerabilities, authentication issues, data handling problems, and common security anti-patterns.

Perform a security-focused review of this code. Check for:

1. **Injection Flaws** (OWASP A03):
   - SQL injection
   - NoSQL injection
   - Command injection
   - LDAP injection
   - Template injection

2. **Broken Authentication** (OWASP A07):
   - Weak password handling
   - Session management issues
   - Missing rate limiting on auth endpoints

3. **Sensitive Data Exposure** (OWASP A02):
   - Secrets in source code
   - Unencrypted sensitive data
   - Excessive logging of PII
   - Improper error messages revealing internals

4. **Access Control** (OWASP A01):
   - Missing authorisation checks
   - IDOR vulnerabilities
   - Privilege escalation paths

5. **Input Validation**:
   - Unvalidated user input
   - Missing type checks
   - File upload vulnerabilities
   - Path traversal risks

For each finding, provide:
- CWE number (if applicable)
- Severity: Critical / High / Medium / Low
- Exploit scenario (how an attacker would exploit this)
- Recommended fix with code example

Code:
```[LANGUAGE]
[PASTE CODE]
```

Key takeaway: Security review prompts should reference specific vulnerability categories (OWASP, CWE) to ensure comprehensive coverage.
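To make the injection category concrete, here is a minimal sqlite3 sketch of the vulnerable pattern such a review should flag, alongside the parameterized fix (the table and function names are illustrative):

```python
import sqlite3

def find_user_unsafe(conn: sqlite3.Connection, username: str):
    # VULNERABLE (CWE-89): user input is interpolated directly into the SQL string
    query = f"SELECT id FROM users WHERE name = '{username}'"
    return conn.execute(query).fetchall()

def find_user_safe(conn: sqlite3.Connection, username: str):
    # FIX: a parameterized query treats the input as data, never as SQL
    return conn.execute("SELECT id FROM users WHERE name = ?", (username,)).fetchall()
```

With the classic payload `' OR '1'='1`, the unsafe version returns every row in the table; the parameterized version returns none.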

Pull Request Review Prompt

Difficulty: beginner

Reviews a pull request diff considering the context of the change, its impact on the broader codebase, and whether it follows team conventions.

Review this pull request.

PR Title: [TITLE]
PR Description: [WHAT THIS CHANGE DOES AND WHY]
Author: [AUTHOR EXPERIENCE LEVEL: junior / mid / senior]
Files changed: [NUMBER]

Diff:
```
[PASTE DIFF]
```

Review for:
1. Does the code do what the PR description says?
2. Are there any bugs or edge cases missed?
3. Is the approach reasonable, or is there a simpler way?
4. Are there any security or performance concerns?
5. Is the code well-tested? What test cases are missing?
6. Does it follow the team's conventions and patterns?

Provide feedback as:
- 🔴 **Must fix**: Issues that should block merging
- 🟡 **Should fix**: Issues that are important but not blocking
- 💭 **Consider**: Suggestions for improvement (optional)
- 👍 **Nice**: Things done well (important for author morale)

Be constructive and explain the "why" behind each suggestion. For junior authors, be more detailed in explanations.

Key takeaway: Providing the PR description and context alongside the diff produces far more relevant reviews than reviewing code in isolation.
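One practical benefit of these severity markers is that a formatted review can be post-processed mechanically, e.g. to fail a CI check when any "must fix" items are present. A hypothetical sketch:

```python
# Map the severity emoji used in the prompt above to bucket names
SEVERITY_MARKERS = {"🔴": "must_fix", "🟡": "should_fix", "💭": "consider", "👍": "nice"}

def bucket_feedback(review_text: str) -> dict[str, list[str]]:
    """Group each line of an AI review under its leading severity marker."""
    buckets: dict[str, list[str]] = {name: [] for name in SEVERITY_MARKERS.values()}
    for line in review_text.splitlines():
        stripped = line.strip()
        for marker, name in SEVERITY_MARKERS.items():
            if stripped.startswith(marker):
                buckets[name].append(stripped)
                break
    return buckets
```

A CI gate could then block the merge whenever `buckets["must_fix"]` is non-empty, while leaving the other buckets as advisory comments.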

Refactoring Suggestion Prompt

Difficulty: intermediate

Identifies refactoring opportunities in existing code and provides specific, actionable suggestions with before/after code examples.

Analyse this code for refactoring opportunities.

Context: [WHAT THIS CODE DOES AND HOW IT'S USED]
Language: [LANGUAGE]
Framework: [FRAMEWORK IF APPLICABLE]
Constraints: [ANY CONSTRAINTS — e.g., must maintain backward compatibility, performance critical]

Code:
```[LANGUAGE]
[PASTE CODE]
```

Identify refactoring opportunities in these categories:

1. **Complexity Reduction**: Functions or methods that do too much
2. **Duplication**: Repeated logic that could be extracted
3. **Naming**: Variables, functions, or classes with unclear names
4. **Design Patterns**: Opportunities to apply well-known patterns
5. **Modern Language Features**: Using newer language features for clarity

For each suggestion:
- What to change and why
- Before (current code snippet)
- After (refactored code snippet)
- Risk level: Low / Medium / High (how likely is this to introduce bugs)
- Priority: Do first / Do when convenient / Nice to have

Do NOT suggest changes that are purely stylistic with no readability benefit. Focus on changes that meaningfully improve the code.

Key takeaway: Effective refactoring suggestions include both the 'before' and 'after' code so developers can see exactly what changes look like.
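For illustration, here is the kind of before/after pair such a prompt should produce — a complexity-reduction suggestion where parsing is extracted from a function that also validates and totals (the example code is hypothetical):

```python
# Before: one function parses, validates, and totals order lines
def order_total_before(lines: list[str]) -> float:
    total = 0.0
    for line in lines:
        name, qty, price = line.split(",")
        qty, price = int(qty), float(price)
        if qty <= 0:
            raise ValueError(f"invalid quantity for {name}")
        total += qty * price
    return total

# After: parsing extracted; each piece is smaller and independently testable
def parse_order_line(line: str) -> tuple[str, int, float]:
    name, qty, price = line.split(",")
    return name, int(qty), float(price)

def order_total_after(lines: list[str]) -> float:
    total = 0.0
    for line in lines:
        name, qty, price = parse_order_line(line)
        if qty <= 0:
            raise ValueError(f"invalid quantity for {name}")
        total += qty * price
    return total
```

Behaviour is unchanged, so the risk level is low — exactly the kind of assessment the prompt's risk field should capture.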

API Design Review Prompt

Difficulty: advanced

Reviews an API design (REST, GraphQL, or RPC) for consistency, usability, error handling, versioning, and adherence to industry best practices.

Review this API design for best practices and consistency.

API type: [REST / GraphQL / gRPC]
API purpose: [WHAT THE API DOES]
Target consumers: [INTERNAL / EXTERNAL / BOTH]

API specification:
```
[PASTE API SPEC, OPENAPI YAML, OR ENDPOINT LIST]
```

Review for:

1. **Naming Conventions**: Consistent resource names, HTTP methods, URL patterns
2. **Error Handling**: Consistent error format, appropriate HTTP status codes, helpful error messages
3. **Pagination**: Properly implemented for list endpoints
4. **Filtering/Sorting**: Consistent parameter patterns
5. **Versioning**: Strategy and implementation
6. **Authentication/Authorisation**: Appropriate for each endpoint
7. **Rate Limiting**: Headers and behaviour
8. **Idempotency**: Safe retry behaviour for write operations
9. **Documentation**: Complete, accurate, with examples

For each issue:
- Category
- Endpoint affected
- Current design
- Recommended change
- Rationale (link to industry best practice if relevant)

Also suggest any missing endpoints that would commonly be expected for this type of API.

Key takeaway: API reviews are most valuable during the design phase — catching inconsistencies before implementation saves significant rework.
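Idempotency (point 8 in the checklist) is one of the items reviewers most often find missing. A minimal, in-memory sketch of idempotency-key handling for a write endpoint (the store and function here are hypothetical stand-ins for a persistent implementation):

```python
# In-memory stand-in for a persistent idempotency-key store
_results: dict[str, dict] = {}

def create_payment(idempotency_key: str, amount: int) -> dict:
    """Replaying the same key returns the stored result instead of redoing the write."""
    if idempotency_key in _results:
        return _results[idempotency_key]  # safe retry: no duplicate charge
    result = {"id": len(_results) + 1, "amount": amount, "status": "created"}
    _results[idempotency_key] = result
    return result
```

A real API would persist the key alongside the stored response and would typically read it from an `Idempotency-Key` request header, so a client retrying a timed-out request cannot create a duplicate resource.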

Patterns

Key patterns to follow

  • Category-separated reviews (security, performance, style) produce more thorough feedback than open-ended review requests
  • Providing context (PR description, author experience) makes reviews more relevant and appropriately calibrated
  • Before/after code examples in refactoring suggestions reduce ambiguity and speed up implementation
  • Severity levels help developers prioritise which feedback to act on first

FAQ

Frequently asked questions

Can AI replace human code reviewers?

AI is an excellent first-pass reviewer that catches common issues quickly, but human reviewers provide architectural judgment, team context, and mentoring that AI cannot. The best approach is AI-assisted review, where AI handles mechanical checks and humans focus on design decisions.

How accurate is AI code review?

AI catches 60-80% of common issues (bugs, security flaws, style violations) reliably. It is less reliable for architectural concerns, business logic errors, and subtle race conditions. Use it as a complement to human review, not a replacement.

Is it safe to share proprietary code with AI review tools?

Use enterprise AI services with data protection agreements that prohibit training on your code. Avoid sharing production secrets, API keys, or customer data in code snippets. Many organisations use self-hosted models for code review of sensitive codebases.

How do teams integrate AI review into their workflow?

Most teams add AI review as a CI step that runs on pull requests alongside linting and tests. Tools like GitHub Copilot, CodeRabbit, and custom integrations can provide automated PR comments. Keep human reviewers as the final approval gate.

Which programming languages does AI review best?

AI performs best on widely used languages with large training corpora: Python, JavaScript/TypeScript, Java, Go, and C#. It works reasonably well on most popular languages. Niche or domain-specific languages may see lower accuracy.
