# Performance Reviewer
This skill performs a performance-only review of the code in its codebase context. It focuses on hotspots, algorithmic complexity, memory behavior, I/O latency, caching opportunities, and concurrency concerns.
It does not review:
- code style
- logic correctness
- security
- API design
## When to Activate
- The user asks for a performance review
- The user suspects slow code, memory waste, or latency issues
- The user wants hotspot analysis before optimization
- The task involves loops, large datasets, repeated queries, caching, or concurrency
- The user wants profiling guidance for non-obvious slow paths
## Review Principles
- Prefer evidence over instinct. If a concern is not algorithmically obvious, recommend profiling before optimization.
- Quantify impact when possible. "Slow" is not a finding; estimated complexity and likely scale impact are.
- Focus on hot paths. Code that runs rarely or once at startup usually does not matter unless it is clearly excessive.
- Distinguish obvious fixes from "measure first" recommendations.
- Acknowledge acceptable performance when the code is probably fine.
## Scope Boundaries
Do not flag performance issues for:
- code that runs once at startup, unless it is likely to exceed about one second
- code that runs very rarely and completes quickly
- micro-optimizations where readability matters more than tiny savings
## Required Workflow
### Step 1: Identify hot paths
Determine what is likely to run:
- frequently
- on large data volumes
- inside request handlers or UI-critical flows
- inside loops, retry paths, event handlers, or worker pipelines
Start from changed code, then expand only when needed to verify impact.
### Step 2: Analyze algorithmic complexity
Look for:
- nested loops on growing data
- repeated linear searches
- sort-inside-loop patterns
- repeated parsing or serialization
- duplicate work across adjacent branches
Quantify both time and space complexity where meaningful.
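A minimal sketch of the "repeated linear search" pattern and its fix. The names (`Order`, `findActiveSlow`) are hypothetical, not from any codebase:

```typescript
interface Order { id: number; userId: number }

// O(n * m): a full linear scan of userIds for every order.
function findActiveSlow(orders: Order[], userIds: number[]): Order[] {
  return orders.filter((o) => userIds.includes(o.userId));
}

// O(n + m): build a Set once, then each membership check is O(1).
function findActiveFast(orders: Order[], userIds: number[]): Order[] {
  const ids = new Set(userIds);
  return orders.filter((o) => ids.has(o.userId));
}
```

The findings report should state this kind of complexity difference explicitly, along with the data sizes at which it starts to matter.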
### Step 3: Check memory patterns
Review:
- allocations inside hot loops
- large temporary objects
- repeated string concatenation in loops
- object retention longer than needed
- closure captures that keep large values alive
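The string-concatenation item can be sketched like this (a hypothetical CSV builder, not real project code). `+=` on a string in a loop allocates a new intermediate string every iteration; collecting parts and joining once allocates far less:

```typescript
// Allocation-heavy: a fresh intermediate string per iteration.
function toCsvConcat(rows: string[][]): string {
  let out = "";
  for (const row of rows) {
    out += row.join(",") + "\n";
  }
  return out;
}

// One pass to build parts, one allocation in the final join.
function toCsvJoin(rows: string[][]): string {
  return rows.map((row) => row.join(",")).join("\n") + (rows.length ? "\n" : "");
}
```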
### Step 4: Check I/O and data access patterns
Look for:
- blocking or synchronous operations on hot paths
- N+1 queries
- repeated file or network access
- unbatched requests
- unnecessary serialization or deserialization
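A self-contained sketch of the N+1 pattern and its batched fix. `fetchBooks` is a hypothetical stand-in for a database call that counts simulated round trips:

```typescript
let queryCount = 0;
const BOOKS: { authorId: number; title: string }[] = [
  { authorId: 1, title: "A" },
  { authorId: 1, title: "B" },
  { authorId: 2, title: "C" },
];

// Stand-in for a DB query; each call is one simulated round trip.
function fetchBooks(authorIds: number[]): { authorId: number; title: string }[] {
  queryCount++;
  return BOOKS.filter((b) => authorIds.includes(b.authorId));
}

// N+1: one "query" per author inside the loop.
function titlesNPlusOne(authorIds: number[]): Map<number, string[]> {
  const out = new Map<number, string[]>();
  for (const id of authorIds) {
    out.set(id, fetchBooks([id]).map((b) => b.title));
  }
  return out;
}

// Batched: a single "query" for all authors, grouped in memory.
function titlesBatched(authorIds: number[]): Map<number, string[]> {
  const out = new Map<number, string[]>();
  for (const id of authorIds) out.set(id, []);
  for (const b of fetchBooks(authorIds)) out.get(b.authorId)!.push(b.title);
  return out;
}
```

The same shape applies to ORM lazy loading: a query inside a loop over parent rows is the signature to search for.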
### Step 5: Identify caching opportunities
Look for:
- repeated pure computations
- repeated lookups with stable inputs
- repeated derived data
- expensive results suitable for memoization or precomputation
Only recommend caching when invalidation and memory tradeoffs are reasonable.
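A minimal memoization sketch for the "repeated pure computations" case. The counter and `expensiveSquare` are illustrative; the pattern only pays off when inputs repeat and the cache's memory stays bounded:

```typescript
let computeCalls = 0;

// Stand-in for an expensive pure function.
function expensiveSquare(n: number): number {
  computeCalls++; // tracks how often the real work runs
  return n * n;
}

// Cache results keyed by the single argument; safe only for pure functions.
function memoize<A, R>(fn: (arg: A) => R): (arg: A) => R {
  const cache = new Map<A, R>();
  return (arg: A): R => {
    if (!cache.has(arg)) cache.set(arg, fn(arg));
    return cache.get(arg)!;
  };
}

const squareMemo = memoize(expensiveSquare);
```

Note this unbounded `Map` is itself a retention concern; a real recommendation should name an eviction strategy (LRU, TTL) when inputs are unbounded.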
### Step 6: Review concurrency and contention
Look for:
- unnecessary sequential work that could be parallelized
- contention points
- coarse-grained locking
- serialized async work that could overlap safely
- thread, worker, or coroutine coordination overhead
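The "serialized async work" item can be sketched as follows; `delay` is a hypothetical stand-in for two independent network calls. Awaiting them one after another adds their latencies; `Promise.all` lets them overlap, which is safe only because neither call depends on the other's result:

```typescript
function delay<T>(ms: number, value: T): Promise<T> {
  return new Promise((resolve) => setTimeout(() => resolve(value), ms));
}

// Sequential: total latency ~ a + b.
async function loadSequential(): Promise<[string, string]> {
  const user = await delay(50, "user");
  const prefs = await delay(50, "prefs");
  return [user, prefs];
}

// Overlapped: total latency ~ max(a, b).
async function loadParallel(): Promise<[string, string]> {
  const [user, prefs] = await Promise.all([delay(50, "user"), delay(50, "prefs")]);
  return [user, prefs];
}
```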
### Step 7: Recommend profiling where needed
For non-obvious concerns, provide a profiling plan instead of pretending certainty.
Examples:
- CPU profiling for hot compute paths
- DB query timing for suspected N+1 issues
- memory snapshots for retention concerns
- tracing request spans for latency decomposition
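When no project profiling tooling exists, a timing harness is a reasonable first check for "is this path actually slow?". A sketch using Node's built-in `perf_hooks` (for deeper analysis, a real CPU profiler such as `node --cpu-prof` is preferable):

```typescript
import { performance } from "node:perf_hooks";

// Run fn repeatedly and report average wall-clock time per iteration.
function timeIt<T>(label: string, fn: () => T, iterations = 1000): T {
  const start = performance.now();
  let result!: T;
  for (let i = 0; i < iterations; i++) result = fn();
  const elapsed = performance.now() - start;
  console.log(`${label}: ${(elapsed / iterations).toFixed(4)} ms/iter`);
  return result;
}
```

A harness like this measures wall-clock time only; it says nothing about allocation or GC pressure, which need memory snapshots instead.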
## Severity Guidance
**CRITICAL**
- obviously poor complexity on a likely hot path
- N+1 queries or repeated blocking I/O in frequently used flows
- memory growth patterns likely to become user-visible or unstable at scale
**MAJOR**
- repeated avoidable work in moderately hot paths
- unnecessary allocations or serialization in request loops
- missed batching or caching opportunities with clear payoff
**MINOR**
- possible micro-optimizations with limited impact
- speculative improvements that should only happen after measurement
Default behavior: report CRITICAL and MAJOR issues first. Mention MINOR issues only when clearly worthwhile or when the user asked for exhaustive review.
## Output Shape
Use a concise, evidence-dense report:
## Performance Review
### Summary
**Overall**: ACCEPTABLE | MINOR ISSUES | MAJOR ISSUES
### Findings
- `path/to/file.ts:42` - [CRITICAL] O(n^2) merge on request path; likely degrades sharply when list size exceeds ~1k items
- `path/to/file.ts:88` - [MAJOR] Repeated JSON parsing inside loop; consider hoisting or caching
### Measure First
- Profile `buildFeed()` with realistic dataset sizes before changing data structures
### Obvious Fixes
1. Replace repeated linear lookup with a `Map`
2. Batch queries before entering the loop
For each finding, include:
- file and line reference
- why the code is likely hot
- estimated time or space impact
- whether the recommendation is "obvious fix" or "measure first"
## Tooling Guidance
- Read the relevant source files and surrounding call paths
- Search for loops, repeated queries, repeated parsing, and cache-like patterns
- Use existing project profiling, benchmark, or tracing commands when available
- Prefer repo-local scripts and measurement tools over invented commands
## Common Failure Modes
- treating all inefficient-looking code as important without checking whether it is hot
- recommending caching without considering invalidation or memory cost
- over-indexing on micro-optimizations
- making performance claims without estimated scale impact
- mixing performance review with correctness, style, or security review
## Related Workflows
- Use quality-reviewer for correctness and maintainability review
- Use style-reviewer for formatting and naming consistency review
- Use this skill before implementation when selecting data structures, or after implementation when reviewing hotspots