Use this skill when reviewing AI-generated code. Activate when the user has code generated by an AI assistant and wants to review it, validate AI output, check for common AI mistakes, ensure code quality of generated code, or verify that AI-generated code follows best practices before merging.
```shell
npx claudepluginhub latestaiagents/agent-skills --plugin skills-authoring
```

This skill uses the workspace's default tool permissions.
Systematically review AI-generated code to catch common mistakes before they hit production.
AI often generates code that looks correct but has subtle bugs.

Check for:

**Hallucinated APIs** - methods or functions that don't exist

```javascript
// AI might generate:
array.findLast(x => x.id === id) // Verify this exists in your target environment
```

**Wrong library versions** - API changes between versions

```javascript
// React 18 vs 19 differences
// Node.js API differences
```

**Off-by-one errors** - loop bounds, array indices

```javascript
for (let i = 0; i <= arr.length; i++) // Should be <
```

**Incorrect null/undefined handling**

```javascript
user.profile.name // What if profile is undefined?
```
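The null-handling pitfall above can be sketched with optional chaining and a fallback. The user objects here are hypothetical, purely for illustration:

```javascript
// Hypothetical records; `profile` may be missing on some of them
const users = [
  { name: 'Ada', profile: { name: 'Ada L.' } },
  { name: 'Alan' }, // no profile
];

// Unsafe: users[1].profile.name would throw a TypeError.
// Safe: optional chaining plus a fallback value.
const displayNames = users.map(u => u.profile?.name ?? u.name);

console.log(displayNames); // ['Ada L.', 'Alan']
```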
AI doesn't prioritize security unless explicitly asked.

Check for:

**SQL injection**

```javascript
// BAD: AI might generate
db.query(`SELECT * FROM users WHERE id = ${userId}`)
// GOOD: Parameterized
db.query('SELECT * FROM users WHERE id = $1', [userId])
```

**XSS vulnerabilities**

```javascript
// BAD: Direct HTML insertion
element.innerHTML = userInput
// GOOD: Escaped, or use your framework
element.textContent = userInput
```

**Exposed secrets**

```javascript
// AI might hardcode values from context
const API_KEY = 'sk-abc123...' // Should be an env var
```

**Missing input validation**

```javascript
// AI often skips validation
function processData(data) {
  return data.items.map(...) // What if data is null?
}
```
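A minimal sketch of what the missing validation could look like, assuming `data` is expected to be an object with an `items` array (the expected shape here is hypothetical):

```javascript
// Defensive version: reject malformed input before touching data.items
function processData(data) {
  if (data == null || !Array.isArray(data.items)) {
    throw new TypeError('processData: expected an object with an items array');
  }
  return data.items.map(item => String(item).trim());
}

console.log(processData({ items: ['  a ', 'b'] })); // ['a', 'b']
```

The point is to fail loudly at the boundary rather than let a `TypeError` surface deep inside the map callback.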
AI optimizes for "looks correct," not "performs well."

Check for:

**N+1 queries**

```javascript
// BAD: AI loves this pattern
users.forEach(async user => {
  const posts = await getPosts(user.id) // N queries!
})
// GOOD: Batch
const posts = await getPostsForUsers(userIds)
```

**Unnecessary re-renders (React)**

```jsx
// BAD: New object every render
<Component style={{ margin: 10 }} />
// GOOD: Stable reference
const style = useMemo(() => ({ margin: 10 }), [])
```

**Memory leaks**

```javascript
// BAD: Missing cleanup
useEffect(() => {
  const interval = setInterval(fetch, 1000)
  // No cleanup!
}, [])
// GOOD: Cleanup
useEffect(() => {
  const interval = setInterval(fetch, 1000)
  return () => clearInterval(interval)
}, [])
```

**Blocking operations**

```javascript
// BAD: Sync file operations
const data = fs.readFileSync(path)
// GOOD: Async
const data = await fs.promises.readFile(path)
```
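The batch pattern above can be sketched end to end with an in-memory stand-in for the database. `getPostsForUsers` and the `POSTS` table are hypothetical names for illustration; the shape of the fix is what matters: one round trip, then group in memory.

```javascript
// Hypothetical data standing in for a posts table
const POSTS = [
  { userId: 1, title: 'a' },
  { userId: 2, title: 'b' },
  { userId: 1, title: 'c' },
];

// One "query" covering every user, instead of one query per user
async function getPostsForUsers(userIds) {
  const ids = new Set(userIds);
  const rows = POSTS.filter(p => ids.has(p.userId)); // single round trip
  const byUser = new Map(userIds.map(id => [id, []]));
  for (const row of rows) byUser.get(row.userId).push(row);
  return byUser;
}

getPostsForUsers([1, 2]).then(byUser => {
  console.log(byUser.get(1).length); // 2
});
```

Note also that the BAD version has a second bug AI reviewers should flag: `forEach` ignores the async callback's promise, so nothing upstream ever waits for those queries to finish.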
AI generates "works now" code, not "maintainable" code.

Check for:

**Magic numbers/strings**

```javascript
// BAD
if (status === 3) { ... }
// GOOD
if (status === OrderStatus.SHIPPED) { ... }
```

**Inconsistent patterns**

```javascript
// AI might mix patterns
const getUser = async () => {} // Arrow function
async function getPost() {} // Function declaration
```

**Missing types (TypeScript)**

```typescript
// BAD: AI uses 'any' when uncertain
function process(data: any) { ... }
// GOOD: Proper types
function process(data: ProcessInput) { ... }
```

**Dead code**

```javascript
// AI sometimes includes unused variables/imports
import { unused } from './utils'
const temp = calculate() // Never used
```
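One way to sketch the magic-number fix: a frozen enum object. `OrderStatus` and its values are hypothetical, chosen to match the `status === 3` example above:

```javascript
// Hypothetical enum replacing the bare literal 3
const OrderStatus = Object.freeze({
  PENDING: 1,
  PAID: 2,
  SHIPPED: 3,
});

function isShipped(status) {
  return status === OrderStatus.SHIPPED;
}

console.log(isShipped(3)); // true
```

`Object.freeze` keeps later code (or a later AI edit) from silently reassigning a status value.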
AI doesn't know your codebase deeply.
Check for:

**First pass**
- Does it compile/run?
- Any obvious red flags?
- Right general approach?

**Security**
- User input handling?
- Database queries?
- Authentication/authorization?
- Sensitive data exposure?

**Correctness**
- Edge cases handled?
- Null/undefined checks?
- Error scenarios?
- Loop bounds correct?

**Consistency**
- Follows project patterns?
- Uses existing utilities?
- Correct imports?
- Consistent naming?

**Performance**
- Obvious inefficiencies?
- Unnecessary operations?
- Memory management?
- Async patterns?
Watch for language-specific gotchas: missing `await` on async functions, `this` context in callbacks, unsafe casts (`as any`), and `__init__.py` awareness in Python packages.

Quick verification commands:

```shell
# TypeScript: Compile check
npx tsc --noEmit
# ESLint: Style and common errors
npx eslint src/new-file.ts
# Tests: Run affected tests
npm test -- --findRelatedTests src/new-file.ts
# Security: Quick scan
npx audit-ci --moderate
```
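The missing-`await` gotcha is easy to demonstrate. `getCount` is a hypothetical async helper; without `await`, you get a pending promise where you expected a value:

```javascript
async function getCount() {
  return 41 + 1;
}

async function main() {
  const missing = getCount();       // forgot await: this is a Promise
  const correct = await getCount(); // awaited: this is the number

  console.log(missing instanceof Promise); // true
  console.log(correct);                    // 42
}

main();
```

Comparisons like `if (missing > 0)` won't throw on a promise, they just silently misbehave, which is why this slips past casual review.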
Regenerate if the overall approach is wrong or the issues are pervasive.
Fix manually if the problems are small and localized.
When the review passes, record it in the PR description:

```markdown
## AI Code Review

**Generated by:** [Claude/GPT/etc]
**Reviewed by:** [Your name]
**Date:** [Date]

### Changes Made After Review
- Fixed null check on line 45
- Added input validation
- Replaced magic number with constant

### Verified
- [x] Security review passed
- [x] Tests added/passing
- [x] Follows project conventions
```