Performance analysis: complexity estimation, profiler output parsing, caching design, regression risk. Use for optimization guidance. NOT for running profilers, load tests, or monitoring.
Install: `npx claudepluginhub wyattowalsh/agents --plugin agents`. This skill uses the workspace's default tool permissions.
Bundled files:

- `data/anti-patterns.json`, `data/caching-strategies.json`, `data/complexity-patterns.json`
- `evals/analyze-complexity.json`, `evals/cache-design.json`, `evals/complexity-analysis.json`, `evals/implicit-trigger.json`, `evals/negative-control.json`, `evals/profile-interpretation.json`, `evals/regression-risk.json`
- `references/anti-patterns.md`, `references/benchmark-methodology.md`, `references/caching-strategies.md`, `references/complexity-patterns.md`, `references/leak-patterns.md`, `references/profiler-guide.md`
- `scripts/benchmark-designer.py`, `scripts/complexity-estimator.py`, `scripts/profile-parser.py`
- `templates/dashboard.html`
Analysis-based performance review. Every recommendation grounded in evidence. 6-mode pipeline: Analyze, Profile, Cache, Benchmark, Regression, Leak-Patterns.
Scope: Performance analysis and recommendations only. NOT for running profilers, executing load tests, infrastructure monitoring, or actual memory leak detection. This skill provides analysis-based guidance, not measurements.
Use these terms exactly throughout all modes:
| Term | Definition |
|---|---|
| complexity | Big-O algorithmic classification of a function or code path |
| hotspot | Code region with disproportionate resource consumption (time or memory) |
| bottleneck | System constraint limiting overall throughput |
| profiler output | Textual data from cProfile, py-spy, perf, or similar tools pasted by user |
| cache strategy | Eviction policy + write policy + invalidation approach for a caching layer |
| benchmark skeleton | Template code for measuring function performance with proper methodology |
| regression risk | Likelihood that a code change degrades performance, scored LOW/MEDIUM/HIGH/CRITICAL |
| anti-pattern | Known performance-harmful code pattern (N+1, unbounded allocation, etc.) |
| evidence | Concrete proof: AST analysis, profiler data, code pattern match, or external reference |
| recommendation | Actionable optimization suggestion with expected impact and trade-offs |
| flame graph | Hierarchical visualization of call stack sampling data |
| wall time | Elapsed real time (includes I/O waits) vs CPU time (compute only) |
| $ARGUMENTS | Mode |
|---|---|
| `analyze <file/function>` | Algorithmic complexity analysis, Big-O review |
| `profile <data>` | Interpret textual profiler output (cProfile, py-spy, perf) |
| `cache <system>` | Caching strategy design (LRU/LFU/TTL/write-through/write-back) |
| `benchmark <code>` | Benchmark design and methodology review |
| `regression <diff>` | Performance regression risk assessment from code diff |
| `leak-patterns` | Common memory leak pattern scan (NOT actual detection) |
| Empty | Show mode menu with examples for each mode |
Algorithmic complexity analysis for files or functions.
Run the complexity estimator script:
```shell
uv run python skills/performance-profiler/scripts/complexity-estimator.py <path>
```
Parse JSON output. If script fails, perform manual AST-level analysis.
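The bundled `complexity-estimator.py` is not reproduced here, but the manual AST fallback can be sketched roughly as follows. This is a heuristic only: it counts loop-nesting depth and deliberately ignores recursion, early exits, and the hidden cost of library calls.

```python
import ast

def estimate_complexity(source: str) -> dict:
    """Rough per-function Big-O guess from maximum loop-nesting depth.

    Depth 0 -> O(1), 1 -> O(n), 2 -> O(n^2), and so on.
    """
    def depth(node, d=0):
        best = d
        for child in ast.iter_child_nodes(node):
            # Loops and comprehensions each add one level of nesting.
            bump = 1 if isinstance(child, (ast.For, ast.While, ast.comprehension)) else 0
            best = max(best, depth(child, d + bump))
        return best

    results = {}
    for func in ast.walk(ast.parse(source)):
        if isinstance(func, ast.FunctionDef):
            d = depth(func)
            results[func.name] = "O(1)" if d == 0 else "O(n)" if d == 1 else f"O(n^{d})"
    return results

print(estimate_complexity(
    "def pairs(xs):\n"
    "    out = []\n"
    "    for a in xs:\n"
    "        for b in xs:\n"
    "            out.append((a, b))\n"
    "    return out\n"
))  # {'pairs': 'O(n^2)'}
```

Treat the output as a starting hypothesis to verify against the evidence column, not as a measurement.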
For each function in scope, estimate the complexity class and cite concrete evidence (loop structure, recursion, data-structure operations), matching code shapes against `references/complexity-patterns.md`. Present findings as a table:
| Function | Estimated Complexity | Evidence | Hotspot Risk | Recommendation |
|---|---|---|---|---|
Include trade-off analysis for each recommendation.
Interpret textual profiler output pasted by the user.
Run the profile parser script on user-provided data:
```shell
uv run python skills/performance-profiler/scripts/profile-parser.py --input <file>
```
If data is inline, save to temp file first. Parse JSON output.
From parsed data: rank functions by self time (tottime) and cumulative time, flag unexpected call counts, and note wall-time vs CPU-time gaps that indicate I/O waits.
For each hotspot, provide: the profiler lines that evidence it, the likely cause, and a recommendation with expected impact and trade-offs.
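As an illustration of what the parsing step extracts (this is a minimal sketch, not the bundled `profile-parser.py`), the following pulls hotspot candidates out of pasted cProfile text and ranks them by self time. Rows whose function location contains spaces (e.g. `{built-in method ...}`) are skipped by this simplified regex.

```python
import json
import re

# Matches cProfile rows like:
#    20004    0.120    0.000    0.480    0.000 models.py:88(serialize)
ROW = re.compile(r"^\s*([\d/]+)\s+([\d.]+)\s+[\d.]+\s+([\d.]+)\s+[\d.]+\s+(\S+)$")

def parse_cprofile(text: str, top: int = 5) -> list[dict]:
    """Extract hotspot candidates from pasted cProfile output."""
    rows = []
    for line in text.splitlines():
        m = ROW.match(line)
        if m:
            ncalls, tottime, cumtime, where = m.groups()
            rows.append({
                "where": where,
                "ncalls": ncalls,
                "tottime": float(tottime),  # self time
                "cumtime": float(cumtime),  # inclusive time (includes callees)
            })
    # High self time relative to peers marks a hotspot candidate.
    return sorted(rows, key=lambda r: -r["tottime"])[:top]

sample = """
   ncalls  tottime  percall  cumtime  percall filename:lineno(function)
    20004    0.120    0.000    0.480    0.000 models.py:88(serialize)
        4    0.900    0.225    0.910    0.228 query.py:12(fetch_all)
"""
print(json.dumps(parse_cprofile(sample), indent=2))
```

Here `query.py:12(fetch_all)` ranks first despite only 4 calls, which is exactly the kind of evidence line to quote in the hotspot table.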
Check each hotspot against the catalog in `references/anti-patterns.md`.

Design caching strategies for a described system.
Ask about or infer from code: read/write ratio, working-set size and stability, freshness requirements, and memory constraints.
Use references/caching-strategies.md decision tree:
| Factor | LRU | LFU | TTL | Write-Through | Write-Back |
|---|---|---|---|---|---|
| Read-heavy, stable working set | Good | Best | OK | -- | -- |
| Write-heavy | -- | -- | -- | Safe | Fast |
| Strict freshness | -- | -- | Best | Best | Risky |
| Memory-constrained | Best | Good | OK | -- | -- |
Deliver: eviction policy, write policy, invalidation strategy, warm-up approach, monitoring recommendations. Include capacity planning formula.
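To make the eviction-plus-freshness composition concrete, here is a minimal LRU-with-TTL sketch; a real deployment would add thread safety, metrics, and invalidation hooks on top.

```python
import time
from collections import OrderedDict

class TTLLRUCache:
    """LRU eviction bounds memory; TTL bounds staleness."""

    def __init__(self, capacity: int, ttl_seconds: float):
        self.capacity = capacity
        self.ttl = ttl_seconds
        self._data: OrderedDict = OrderedDict()  # key -> (value, expires_at)

    def get(self, key):
        item = self._data.get(key)
        if item is None:
            return None
        value, expires_at = item
        if time.monotonic() > expires_at:  # stale: drop and report a miss
            del self._data[key]
            return None
        self._data.move_to_end(key)        # mark as most recently used
        return value

    def put(self, key, value):
        if key in self._data:
            del self._data[key]
        elif len(self._data) >= self.capacity:
            self._data.popitem(last=False)  # evict least recently used
        self._data[key] = (value, time.monotonic() + self.ttl)
```

For capacity planning, one rough starting point (an assumption, not this skill's prescribed formula) is capacity ≈ memory_budget / (avg_entry_bytes + per-entry overhead), then validate with hit-rate monitoring.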
Design benchmarks and review methodology.
Run the benchmark designer script:
```shell
uv run python skills/performance-profiler/scripts/benchmark-designer.py --function <signature> --language <lang>
```
Parse JSON output for setup code, benchmark code, iterations, warmup.
Validate against benchmark best practices: warmup before timing, enough iterations for stable statistics, isolation from unrelated I/O, and reporting medians/percentiles rather than a bare mean (per `references/benchmark-methodology.md`).
Provide complete benchmark code with methodology notes, expected metrics, and interpretation guide.
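A benchmark skeleton consistent with these practices might look like the following sketch: warmup iterations first, then per-iteration timing with `time.perf_counter`, reported as median and p95 rather than a mean.

```python
import statistics
import time

def benchmark(fn, *, warmup=100, iterations=1000):
    """Warm up, then time each call; report median/p95, not mean.

    Median resists outliers (GC pauses, scheduler noise). Repeat the whole
    benchmark to check run-to-run variance before trusting the numbers.
    """
    for _ in range(warmup):  # stabilize caches, allocator, any JIT
        fn()
    samples = []
    for _ in range(iterations):
        start = time.perf_counter()
        fn()
        samples.append(time.perf_counter() - start)
    samples.sort()
    return {
        "min_s": samples[0],
        "median_s": statistics.median(samples),
        "p95_s": samples[int(0.95 * len(samples))],
    }

print(benchmark(lambda: sorted(range(1000))))
```

Note these are wall-time samples: if `fn` does I/O, the numbers include waits, which may or may not be what the benchmark is meant to measure.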
Assess performance regression risk from a code diff.
If a path is provided, read the diff. If a git range is provided, run `git diff`. Identify changed functions and their call sites.
For each changed function, score these risk factors:
| Risk Factor | Weight | Check |
|---|---|---|
| Complexity increase | 3x | Loop nesting added, algorithm changed |
| Hot path change | 3x | Function called in request/render path |
| Data structure change | 2x | Collection type or size assumptions changed |
| I/O pattern change | 2x | New network/disk calls, removed batching |
| Memory allocation | 1x | New allocations in loops, larger buffers |
Risk score = sum of (weight * severity). Map to LOW/MEDIUM/HIGH/CRITICAL.
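The scoring rule can be sketched in code. The per-factor severity scale (0–3) and the level thresholds below are illustrative assumptions, not values specified by this skill.

```python
# Weights taken from the risk-factor table above.
WEIGHTS = {
    "complexity_increase": 3,
    "hot_path_change": 3,
    "data_structure_change": 2,
    "io_pattern_change": 2,
    "memory_allocation": 1,
}

def risk_level(severities: dict) -> tuple:
    """Map severities (0 = none .. 3 = severe, judged from the diff)
    to a weighted score and a risk band. Thresholds are illustrative."""
    score = sum(WEIGHTS[factor] * s for factor, s in severities.items())
    if score >= 15:
        return score, "CRITICAL"
    if score >= 9:
        return score, "HIGH"
    if score >= 4:
        return score, "MEDIUM"
    return score, "LOW"

print(risk_level({"complexity_increase": 2, "hot_path_change": 1,
                  "data_structure_change": 0, "io_pattern_change": 0,
                  "memory_allocation": 1}))  # (10, 'HIGH')
```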
Present a regression risk matrix with: per-function risk scores, the factors that triggered them, and suggested mitigations or pre-merge benchmarks.
Scan for common memory leak patterns. Static analysis only -- NOT actual leak detection.
Read target files and check against patterns in references/leak-patterns.md:
For each potential leak pattern found, report:
| Pattern | Language | Severity | False Positive Risk |
|---|---|---|---|
Present findings with code citations, explain why each pattern risks leaking, and suggest fixes. Acknowledge that static analysis has high false positive rates -- recommend actual profiling tools for confirmation.
| Scope | Strategy |
|---|---|
| Single function | Direct analysis, inline report |
| Single file (< 500 LOC) | Script-assisted analysis, structured report |
| Multiple files / module | Parallel subagents per file, consolidated report |
| Full codebase | Prioritize entry points and hot paths, sample-based analysis |
Load ONE reference at a time. Do not preload all references into context.
| File | Content | Read When |
|---|---|---|
| `references/complexity-patterns.md` | Code pattern to Big-O mapping with examples | Mode 1 (Analyze) |
| `references/caching-strategies.md` | Caching decision tree, eviction policies, trade-offs | Mode 3 (Cache) |
| `references/anti-patterns.md` | Performance anti-patterns catalog (N+1, unbounded alloc, etc.) | Mode 2 (Profile), Mode 5 (Regression), Mode 6 (Leak) |
| `references/leak-patterns.md` | Memory leak patterns by language (Python, JS, Go, Java) | Mode 6 (Leak-Patterns) |
| `references/profiler-guide.md` | Profiler output interpretation, flame graph reading | Mode 2 (Profile) |
| `references/benchmark-methodology.md` | Benchmark design best practices, statistical methods | Mode 4 (Benchmark) |
| Script | When to Run |
|---|---|
| `scripts/complexity-estimator.py` | Mode 1 — static complexity analysis via AST |
| `scripts/profile-parser.py` | Mode 2 — parse cProfile/pstats textual output to JSON |
| `scripts/benchmark-designer.py` | Mode 4 — generate benchmark skeleton from function signature |
| Template | When to Render |
|---|---|
| `templates/dashboard.html` | After any mode — inject results JSON into data tag |
| File | Content |
|---|---|
| `data/complexity-patterns.json` | Code pattern to Big-O mapping (machine-readable) |
| `data/caching-strategies.json` | Caching decision tree (machine-readable) |
| `data/anti-patterns.json` | Performance anti-patterns catalog (machine-readable) |
Cite every finding as [file:line]; no generic warnings.