Identifies and fixes performance bottlenecks by profiling code, suggesting targeted optimizations, implementing them with TDD, and verifying improvements. Supports backend (Python, Go, Node), frontend (JS), and full-stack scopes.
Install:

npx claudepluginhub mountainunicorn/add --plugin add
Identify and fix performance bottlenecks. This skill profiles code to find slow operations, suggests optimizations, and implements them with test-driven discipline.
The Optimize skill improves application performance while maintaining correctness. It profiles code, prioritizes bottlenecks by ROI, implements fixes test-first, and verifies the results.
Performance optimization follows TDD discipline: write tests that capture performance expectations before implementing optimizations.
Before profiling, run the setup checklist:
1. Verify the implementation exists
2. Load configuration
3. Determine scope
4. Check for profiling data
5. Identify profiling tools
6. Check for a session handoff in .add/handoff.md, if it exists

Then establish baseline measurements:
For Backend (Python):
# Profile with py-spy
py-spy record -o profile.svg -- python -m pytest tests/
# Or use cProfile
python -m cProfile -s cumulative app.py > profile.txt
# Database queries
EXPLAIN ANALYZE SELECT ...
Capture metrics: average and p95 response time, peak memory, database query time, and cache hit rate.
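When py-spy is not available, a stdlib-only helper can capture the same cumulative-time view programmatically. This is a sketch: `profile_call` and `slow_sum` are illustrative names, not part of the skill's API.

```python
import cProfile
import io
import pstats

def profile_call(fn, *args, **kwargs):
    """Run fn under cProfile and return (result, top-10 cumulative report)."""
    profiler = cProfile.Profile()
    result = profiler.runcall(fn, *args, **kwargs)
    stream = io.StringIO()
    pstats.Stats(profiler, stream=stream).sort_stats("cumulative").print_stats(10)
    return result, stream.getvalue()

def slow_sum(n):
    # Stand-in for the operation being profiled
    return sum(i * i for i in range(n))

result, report = profile_call(slow_sum, 100_000)
```

The report text lists the hottest functions by cumulative time, which is the same signal the py-spy flame graph gives visually.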
For Frontend (JavaScript):
# Lighthouse CLI
npm install -g lighthouse
lighthouse https://localhost:3000 --output json --output html
# Or use Chrome DevTools
# DevTools → Performance tab → Record
Capture metrics: First Contentful Paint, Largest Contentful Paint, JavaScript execution time, and bundle size.
For Full System: combine the backend and frontend baselines above.
Create Baseline Report:
# Performance Baseline
## Backend Metrics
- Average response time: 250ms (target: 100ms) ⚠
- 95th percentile response: 800ms (target: 200ms) ⚠
- Peak memory: 120MB (target: 50MB) ⚠
- Database query time: 180ms average (target: 50ms) ⚠
- Cache hit rate: 62% (target: 85%) ⚠
## Frontend Metrics
- First Contentful Paint: 2.1s (target: 1.5s) ⚠
- Largest Contentful Paint: 3.2s (target: 2.5s) ⚠
- JavaScript execution: 450ms (target: 200ms) ⚠
- Bundle size: 580KB (target: 300KB) ⚠
## Identified Bottlenecks (Priority Order)
1. Database query in /api/list endpoint (180ms)
2. JavaScript bundle parsing (150ms)
3. Form validation on every keystroke (50ms)
4. Unoptimized images (120KB uncompressed)
5. Missing indexes on user lookup
For each bottleneck:
- Quantify the impact
- Estimate the improvement potential
- Calculate ROI (time saved vs. implementation effort)
- Group by category
Example prioritization:
| Bottleneck | Time | Impact | Effort | ROI | Priority |
|---------------------|-------|--------|--------|-----|----------|
| DB indexes | 180ms | High | 2h | 90x | 1 |
| Bundle split | 150ms | High | 4h | 37x | 2 |
| Validation debounce | 50ms | Low | 1h | 50x | 3 |
| Image optimization | 120ms | Medium | 3h | 40x | 4 |
| Caching | 100ms | Medium | 3h | 33x | 5 |
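The prioritization can be reproduced mechanically. In this sketch the data is copied from the table, and the ranking rule (high-impact items first, then descending ROI) is an inference from the table's ordering, not a prescribed formula:

```python
# Bottleneck data taken from the prioritization table.
bottlenecks = [
    {"name": "Caching", "saved_ms": 100, "impact": "Medium", "effort_h": 3},
    {"name": "DB indexes", "saved_ms": 180, "impact": "High", "effort_h": 2},
    {"name": "Validation debounce", "saved_ms": 50, "impact": "Low", "effort_h": 1},
    {"name": "Bundle split", "saved_ms": 150, "impact": "High", "effort_h": 4},
    {"name": "Image optimization", "saved_ms": 120, "impact": "Medium", "effort_h": 3},
]

def roi(b):
    # ROI = milliseconds saved per hour of effort
    return b["saved_ms"] / b["effort_h"]

# High-impact first, then by descending ROI (inferred ranking rule)
ranked = sorted(bottlenecks, key=lambda b: (b["impact"] != "High", -roi(b)))
```

Running this reproduces the table's priority order exactly.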
For each optimization:
- Specify the optimization
- Plan with TDD (e.g., a budget assertion such as assertLatency(operation, maxMs))
- Consider trade-offs
- Document the approach
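A minimal Python version of that budget assertion might look like the following; `assert_latency` is a sketch of the idea, not a library function:

```python
import time

def assert_latency(operation, max_ms):
    """Run operation() and fail if it exceeds the max_ms budget."""
    start = time.perf_counter()
    result = operation()
    elapsed_ms = (time.perf_counter() - start) * 1000
    assert elapsed_ms <= max_ms, f"{elapsed_ms:.1f}ms exceeds {max_ms}ms budget"
    return result

# Passes comfortably within a generous budget
value = assert_latency(lambda: sum(range(1000)), max_ms=500)
```

Returning the operation's result lets the same call serve both the performance check and any subsequent correctness assertions.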
Example optimization plan:
# Optimization 1: Add database index on user_id
Bottleneck: User lookup query takes 180ms
Current: SELECT * FROM orders WHERE user_id = ?
(full table scan on 1M rows)
Optimization: CREATE INDEX idx_orders_user_id ON orders(user_id)
Expected improvement: 180ms → 5ms (35x faster)
Trade-offs: Slight write performance impact, +10MB storage
Test plan:
- test_list_orders_within_performance_budget() (max 50ms)
- Verify with: pytest test_perf.py::test_list_orders_latency
Fallback: Drop the index if it causes write performance issues
For each optimization, follow TDD discipline:
Step 4a: Write Performance Test
import pytest
import time

def test_list_orders_performance():
    """Orders list should return < 50ms for 1000 orders"""
    # Arrange: create test data
    user = create_test_user()
    for i in range(1000):
        create_test_order(user)

    # Act: time the operation
    start = time.perf_counter()
    orders = get_user_orders(user)
    duration = time.perf_counter() - start

    # Assert: verify performance
    assert duration < 0.050, f"Got {duration*1000:.1f}ms, target < 50ms"
    assert len(orders) == 1000
Step 4b: Run Test - Verify It Fails
pytest test_perf.py::test_list_orders_performance -v
# FAILED - got 180ms, target 50ms
Step 4c: Implement Optimization
For database index optimization:
-- migrations/001_add_user_index.sql
CREATE INDEX idx_orders_user_id ON orders(user_id);
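The index's effect can be rehearsed against an in-memory SQLite database before touching production; EXPLAIN QUERY PLAN shows the full scan become an index search. Table and index names mirror the migration above; the rest is an illustrative sketch:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, user_id INTEGER)")
conn.executemany(
    "INSERT INTO orders (user_id) VALUES (?)",
    [(i % 100,) for i in range(10_000)],
)

def query_plan(conn):
    rows = conn.execute(
        "EXPLAIN QUERY PLAN SELECT * FROM orders WHERE user_id = ?", (7,)
    ).fetchall()
    return rows[0][3]  # the human-readable plan detail

before = query_plan(conn)   # full table scan
conn.execute("CREATE INDEX idx_orders_user_id ON orders(user_id)")
after = query_plan(conn)    # search using the new index
```

The same EXPLAIN-before-and-after check works on production databases (e.g., EXPLAIN ANALYZE in PostgreSQL, as shown in the profiling step).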
For algorithm optimization:
# Before: O(n²) algorithm
def find_duplicates(items):
    duplicates = []
    for i, item in enumerate(items):
        for j in range(i + 1, len(items)):
            if items[i] == items[j]:
                duplicates.append(item)
    return duplicates

# After: O(n) algorithm
def find_duplicates(items):
    seen = set()
    duplicates = set()
    for item in items:
        if item in seen:
            duplicates.add(item)
        seen.add(item)
    return list(duplicates)
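Before committing an algorithmic rewrite like this, a quick micro-benchmark can confirm the win. This sketch suffixes the function names so both versions coexist; the data size is arbitrary:

```python
import random
import timeit

def find_duplicates_quadratic(items):
    # O(n^2): compare every pair
    duplicates = []
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            if items[i] == items[j]:
                duplicates.append(items[i])
    return duplicates

def find_duplicates_linear(items):
    # O(n): single pass with a set
    seen, duplicates = set(), set()
    for item in items:
        if item in seen:
            duplicates.add(item)
        seen.add(item)
    return list(duplicates)

data = [random.randrange(500) for _ in range(1_000)]
# Both versions must agree before comparing speed
assert set(find_duplicates_quadratic(data)) == set(find_duplicates_linear(data))
slow = timeit.timeit(lambda: find_duplicates_quadratic(data), number=3)
fast = timeit.timeit(lambda: find_duplicates_linear(data), number=3)
```

Checking correctness first matters: a fast-but-wrong rewrite fails the "no regression" step later anyway.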
For bundle optimization:
// Before: Single large bundle (580KB)
import * as app from './app.js'; // all code
// After: Code splitting
// main.js (180KB)
import { HomePage } from './pages/home.js';
// admin.js (150KB) - loaded on demand
// reports.js (120KB) - loaded on demand
Step 4d: Run Test - Verify It Passes
pytest test_perf.py::test_list_orders_performance -v
# PASSED - got 8ms, target 50ms ✓
Step 4e: Verify No Regression
Run full test suite to ensure optimization doesn't break anything:
npm test # or pytest
# All tests pass ✓
After implementing optimization:
Measure actual improvement
Before: 180ms average
After: 8ms average
Improvement: 95.6% faster (22.5x improvement)
Target met: ✓ (8ms < 50ms target)
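The arithmetic behind those two figures (95.6% and 22.5x) is simple enough to codify; `improvement` here is an illustrative helper:

```python
def improvement(before_ms, after_ms):
    """Return (percent reduction, speedup factor) for a latency change."""
    percent = (before_ms - after_ms) / before_ms * 100
    speedup = before_ms / after_ms
    return percent, speedup

percent, speedup = improvement(180, 8)
# 180ms -> 8ms: 95.6% faster, 22.5x improvement
```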
Update performance baseline
Document in code
def get_user_orders(user_id):
    """Retrieve user's orders.

    Performance: < 10ms (uses index on user_id)
    Note: This uses an indexed query to keep latency low.
    If you need to add more WHERE clauses, ensure they
    are also indexed or consider query optimization.
    See: docs/performance.md for optimization history
    """
    return db.query(
        "SELECT * FROM orders WHERE user_id = ? ORDER BY created DESC",
        [user_id],
    )
Create performance documentation
# Performance Documentation
## Performance Targets
- Page load time: < 2s
- API response: < 100ms (p95)
- Bundle size: < 300KB
## Optimization History
### 2025-02-07: Database Index on user_id
- Problem: User orders query took 180ms
- Solution: Added index on orders.user_id
- Result: Now 8ms (22.5x improvement)
- Test: test_list_orders_performance
- Status: ✓ Deployed to production
### 2025-02-06: Code splitting
- Problem: Bundle size 580KB, slow initial load
- Solution: Split into main (180KB) + lazy chunks
- Result: FCP reduced from 2.1s to 1.2s
- Test: test_first_contentful_paint
- Status: ✓ Deployed to production
## Current Performance Profile
- Average API response: 45ms (target: 100ms) ✓
- Page load FCP: 1.2s (target: 1.5s) ✓
- Bundle size: 280KB (target: 300KB) ✓
- Memory peak: 45MB (target: 50MB) ✓
Upon completion, output:
# Performance Optimization Complete ✓
## Optimizations Applied
- Count: {N} optimizations implemented
- All tests passing: ✓
## Results Summary
| Optimization | Before | After | Improvement |
|--------------|--------|-------|-------------|
| Database query | 180ms | 8ms | 95.6% ↓ |
| Bundle size | 580KB | 280KB | 51.7% ↓ |
| Form validation latency | 50ms | 15ms | 70% ↓ |
## Performance Targets Status
- Page load (FCP): 1.2s (target: 1.5s) ✓
- API response (p95): 45ms (target: 100ms) ✓
- Bundle size: 280KB (target: 300KB) ✓
- Memory peak: 45MB (target: 50MB) ✓
## Scope Optimized
- Backend: ✓
- Frontend: ✓
- Database: ✓
## Tests Updated
- Performance tests: {count} new tests
- All tests passing: {count}/{count}
## Documentation
- Performance profile: docs/performance.md
- Optimization log: [detailed results]
## Next Steps
1. Review profile and optimization decisions
2. Run full quality gates: /add:verify --level deploy
3. Deploy to staging and monitor
4. If all good, deploy to production
## Performance Test Results
- test_list_orders_performance: PASS (8ms)
- test_form_validation_latency: PASS (15ms)
- test_bundle_load_time: PASS (0.8s)
- test_memory_usage: PASS (45MB)
Use TaskCreate and TaskUpdate to report progress through the CLI spinner. Create tasks at the start of each major phase and mark them completed as they finish.
Tasks to create:
| Phase | Subject | activeForm |
|---|---|---|
| Profile | Profiling performance baseline | Profiling performance... |
| Bottlenecks | Identifying bottlenecks | Identifying bottlenecks... |
| Optimize | Applying optimizations | Applying optimizations... |
| Verify | Verifying performance improvements | Verifying improvements... |
Mark each task in_progress when starting and completed when done. This gives the user real-time visibility into skill execution.
Handle these failure cases:
- Performance test not passing after optimization
- Optimization breaks other tests
- Unable to reach the target
- Profiling tools not available
{
  "perf": {
    "pageLoadTime": 2.0,
    "responseTime": 100,
    "memoryLimit": 50,
    "bundleSize": 300,
    "profiler": "chrome-devtools",
    "targetPlatforms": ["web", "mobile"]
  }
}
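A loader for that config might merge user-supplied values over defaults. The DEFAULTS values mirror the sample configuration above; `load_perf_config` itself is illustrative, not the skill's actual loader:

```python
import json

# Defaults mirror the sample configuration
DEFAULTS = {
    "pageLoadTime": 2.0,
    "responseTime": 100,
    "memoryLimit": 50,
    "bundleSize": 300,
    "profiler": "chrome-devtools",
    "targetPlatforms": ["web", "mobile"],
}

def load_perf_config(raw_json):
    """Merge a user-supplied "perf" block over the defaults."""
    user = json.loads(raw_json).get("perf", {})
    return {**DEFAULTS, **user}

config = load_perf_config('{"perf": {"responseTime": 150}}')
```

Merging over defaults means a partial config (only the keys the user cares about) still yields a complete set of targets.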
Best practices:
- Test real scenarios
- Measure consistently
- Set realistic targets
- Monitor for regressions
- Document trade-offs