Performance optimization through systematic profiling, bottleneck identification, and speed improvements
Analyzes performance bottlenecks through profiling and applies incremental optimizations with benchmarking.
/plugin marketplace add avovello/cc-plugins
/plugin install optimize@cc-plugins

Purpose: Performance optimization through systematic profiling, bottleneck identification, and measurable speed improvements
The optimize command addresses performance bottlenecks through data-driven optimization. Performance cost tends to concentrate: roughly 80% of the slowdown typically comes from 20% of the code, which makes targeted, measured optimization far more effective than broad rewrites.
Key Distinction: Optimize improves speed and efficiency, while Refactor improves maintainability without changing performance, and Bugfix repairs broken behavior.
- Check that the target exists
- Determine the optimization scope
- Identify the project type

Run the performance-profiler agent to capture baseline metrics, collect detailed profiling data, flag operations that exceed thresholds, and measure CPU, memory, and I/O usage.
Output:
optimize-analysis/
├── BASELINE_METRICS.md # Current performance numbers
├── PROFILING_DATA.md # Detailed profiling results
├── SLOW_OPERATIONS.md # Operations exceeding thresholds
└── RESOURCE_USAGE.md # CPU, memory, I/O usage
Run the bottleneck-identifier agent to rank bottlenecks by expected impact and estimate the improvement each fix could deliver.
Output: Prioritized list of bottlenecks with estimated improvements
Run appropriate specialized agents based on bottleneck types:
Database Issues → query-optimizer agent: missing indexes, N+1 query patterns, queries returning unnecessary data
Caching Opportunities → cache-strategist agent: cache layer selection, TTLs, cache warming, invalidation
Algorithm Issues → code-optimizer agent: complexity reduction and better data structures
Output: OPTIMIZATION_PLAN.md with step-by-step approach
Present the plan to the user and wait for approval before proceeding.
For each optimization (see the benchmark harness sketch below):
1. Benchmark before
2. Apply the optimization
3. Benchmark after
4. Run the tests
5. If improved: ✅ keep the change and move on
6. If no improvement or a regression: ❌ revert and try an alternative
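The benchmark-before/benchmark-after step might look roughly like the sketch below. This is illustrative, not the plugin's actual harness; the warm-up and measurement counts mirror the warm_up_runs and measurement_runs config keys shown later, and the two functions under test are placeholders.

```typescript
// Minimal benchmark harness sketch (illustrative, not the plugin's internals).
// Warm-up runs let JITs and caches settle; the median of the measurement
// runs is less noisy than the mean.
function benchmark(fn: () => void, warmUpRuns = 10, measurementRuns = 100): number {
  for (let i = 0; i < warmUpRuns; i++) fn();
  const samples: number[] = [];
  for (let i = 0; i < measurementRuns; i++) {
    const start = performance.now();
    fn();
    samples.push(performance.now() - start);
  }
  samples.sort((a, b) => a - b);
  return samples[Math.floor(samples.length / 2)]; // median, in ms
}

// Hypothetical before/after implementations of the same operation.
const slow = () => { let s = 0; for (let i = 0; i < 1_000_000; i++) s += i; };
const fast = () => { /* optimized variant of the same work */ };

const before = benchmark(slow);
const after = benchmark(fast);
const improvementPct = ((before - after) / before) * 100;
console.log(`Before: ${before.toFixed(2)}ms, after: ${after.toFixed(2)}ms (${improvementPct.toFixed(1)}% faster)`);
```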
Run the load-tester agent to validate the optimizations under sustained load.
Success Criteria: the configured latency and throughput targets hold for the full load-test duration with no regressions.
Run the benchmark-validator agent to confirm the measured improvements are reproducible.
Success Criteria: each change meets min_improvement_percent and no metric regresses beyond max_regression_percent.
Generate documentation:
optimize-output/
├── OPTIMIZATION_SUMMARY.md # What was optimized and why
├── IMPROVEMENTS.md # Before/after metrics
├── BENCHMARKS.md # Detailed benchmark results
├── COMMIT_MESSAGE.md # Suggested commit message
└── MONITORING_RECOMMENDATIONS.md # What to monitor going forward
If requested, create a git commit using the message suggested in COMMIT_MESSAGE.md.
For each optimization:

Benchmark → Apply → Benchmark → (Improved?) → Commit → Next
                                     ↓ No improvement
                          Revert → Analyze → Try Alternative
Max Iterations: 20 optimizations per session
Safety: each optimization is benchmarked and tested
| API Response Time (p95) | Rating |
|---|---|
| < 100ms | Excellent ✅ |
| 100-300ms | Good ✓ |
| 300-1000ms | Acceptable ⚠️ |
| > 1000ms | Poor - Optimize 🔴 |
| Page Load Time | Rating |
|---|---|
| < 1s | Excellent ✅ |
| 1-3s | Good ✓ |
| 3-5s | Acceptable ⚠️ |
| > 5s | Poor - Optimize 🔴 |
| Database Query Time | Rating |
|---|---|
| < 10ms | Excellent ✅ |
| 10-50ms | Good ✓ |
| 50-100ms | Acceptable ⚠️ |
| > 100ms | Poor - Optimize 🔴 |
Optional .claude/optimize-config.yaml:
```yaml
optimize:
  targets:
    api_response_time_ms: 200          # Target p95 response time
    page_load_time_ms: 2000            # Target page load
    database_query_ms: 50              # Target query time
    throughput_rps: 1000               # Requests per second
  thresholds:
    min_improvement_percent: 20        # Minimum improvement to accept
    max_regression_percent: 5          # Maximum acceptable regression
  benchmarking:
    warm_up_runs: 10                   # Warm-up iterations
    measurement_runs: 100              # Measurement iterations
    load_test_duration_seconds: 300    # Load test duration
  safety:
    require_tests_pass: true           # Require tests before/after
    auto_revert_on_regression: true    # Revert if performance degrades
```
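As a sketch of how these thresholds might gate a change, the function below keeps an optimization only when it clears the minimum improvement and auto-rejects regressions beyond the limit. The interface and function names are hypothetical, not part of the plugin.

```typescript
// Hypothetical gate mirroring min_improvement_percent / max_regression_percent.
interface Thresholds {
  minImprovementPercent: number;
  maxRegressionPercent: number;
}

function shouldKeepChange(beforeMs: number, afterMs: number, t: Thresholds): boolean {
  const deltaPct = ((beforeMs - afterMs) / beforeMs) * 100; // positive = faster
  if (deltaPct < -t.maxRegressionPercent) return false;     // regression: revert
  return deltaPct >= t.minImprovementPercent;               // keep only meaningful wins
}

// Example: 450ms → 150ms clears a 20% bar; 450ms → 440ms does not.
const t = { minImprovementPercent: 20, maxRegressionPercent: 5 };
console.log(shouldKeepChange(450, 150, t)); // true
console.log(shouldKeepChange(450, 440, t)); // false
```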
/optimize "improve performance of /api/users endpoint"
# Output:
# ✅ Profiled /api/users endpoint (current: 450ms p95)
# ✅ Identified 3 bottlenecks:
# 1. N+1 query (300ms) - HIGH IMPACT
# 2. Missing index on user_roles (100ms) - MEDIUM IMPACT
# 3. Eager loading all fields (50ms) - LOW IMPACT
#
# ✅ Step 1/3: Fix N+1 query with eager loading
# Before: 450ms → After: 150ms (67% faster) ✅
#
# ✅ Step 2/3: Add index on user_roles.user_id
# Before: 150ms → After: 80ms (47% faster) ✅
#
# ✅ Step 3/3: Implement field selection
# Before: 80ms → After: 65ms (19% faster) ✅
#
# Overall improvement: 450ms → 65ms (86% faster) 🚀
# All 87 tests pass ✅
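The step 1 fix follows the standard eager-loading pattern. A minimal sketch, assuming a Postgres-style db.query helper; the table and column names are invented for illustration:

```typescript
// Hypothetical db helper; schema and names are illustrative.
declare const db: { query: (sql: string, params?: unknown[]) => Promise<any[]> };

// Before: 1 query for users + N queries for roles (the N+1 pattern).
async function getUsersNPlusOne() {
  const users = await db.query("SELECT id, name FROM users");
  for (const user of users) {
    user.roles = await db.query(
      "SELECT role FROM user_roles WHERE user_id = $1", [user.id]);
  }
  return users;
}

// After: a single join replaces the N per-user round trips.
async function getUsersEager() {
  return db.query(`
    SELECT u.id, u.name, r.role
    FROM users u
    LEFT JOIN user_roles r ON r.user_id = u.id`);
}
```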
/optimize "improve database query performance in src/services/"
# Output:
# ✅ Analyzed 234 database queries
# ✅ Found 12 slow queries (>100ms)
# ✅ Identified optimization opportunities:
# - 5 missing indexes
# - 3 N+1 query patterns
# - 4 queries returning unnecessary data
#
# ✅ Step 1/12: Add index on orders.customer_id
# Query time: 450ms → 12ms (96% faster) ✅
#
# ✅ Step 2/12: Fix N+1 in getOrdersWithItems
# Query count: 1+N → 2 queries (reduced from 150 to 2) ✅
# Time: 800ms → 45ms (94% faster) ✅
# ...
#
# Total queries optimized: 12
# Average improvement: 88% faster
# All 156 tests pass ✅
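The getOrdersWithItems fix (1+N queries collapsed to 2) typically looks like the sketch below: fetch the parent rows, batch-fetch all children in one query, then group in memory. The db helper, schema, and Postgres-style placeholders are assumptions.

```typescript
// Hypothetical sketch of the 1+N → 2 queries fix; names are illustrative.
declare const db: { query: (sql: string, params?: unknown[]) => Promise<any[]> };

async function getOrdersWithItems(customerId: number) {
  // Query 1: the orders (helped further by the index on orders.customer_id).
  const orders = await db.query(
    "SELECT id, total FROM orders WHERE customer_id = $1", [customerId]);

  // Query 2: all items for those orders in one batch, not one query per order.
  const ids = orders.map(o => o.id);
  const items = await db.query(
    "SELECT order_id, sku, qty FROM order_items WHERE order_id = ANY($1)", [ids]);

  // Group items in memory and attach them to their orders.
  const byOrder = new Map<number, any[]>();
  for (const item of items) {
    if (!byOrder.has(item.order_id)) byOrder.set(item.order_id, []);
    byOrder.get(item.order_id)!.push(item);
  }
  return orders.map(o => ({ ...o, items: byOrder.get(o.id) ?? [] }));
}
```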
/optimize "reduce React component re-renders in dashboard"
# Output:
# ✅ Profiled React dashboard (current: 3.2s initial render)
# ✅ Identified performance issues:
# - Unnecessary re-renders: 847 renders for 12 components
# - Large component tree: 45 nested levels
# - Missing memoization: 15 expensive calculations
#
# ✅ Step 1/5: Add React.memo to list components
# Renders: 847 → 23 (97% reduction) ✅
#
# ✅ Step 2/5: Use useMemo for expensive calculations
# Calculation time: 1.2s → 5ms (99.6% faster) ✅
#
# ✅ Step 3/5: Implement virtual scrolling for long lists
# Render time: 1.8s → 120ms (93% faster) ✅
# ...
#
# Overall: 3.2s → 0.4s initial render (87.5% faster) 🚀
# Lighthouse score: 45 → 92 ✅
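Steps 1 and 2 use standard React APIs (React.memo and useMemo); the component and prop names below are invented for illustration:

```tsx
import React, { useMemo } from "react";

interface Row { id: number; value: number; }

// React.memo skips re-rendering when props are shallow-equal, so parent
// re-renders no longer cascade into every list row.
const ListRow = React.memo(function ListRow({ row }: { row: Row }) {
  return <li>{row.value}</li>;
});

function Dashboard({ rows }: { rows: Row[] }) {
  // useMemo caches the expensive aggregation until `rows` actually changes.
  const total = useMemo(
    () => rows.reduce((sum, r) => sum + r.value, 0),
    [rows],
  );

  return (
    <div>
      <p>Total: {total}</p>
      <ul>{rows.map(r => <ListRow key={r.id} row={r} />)}</ul>
    </div>
  );
}

export default Dashboard;
```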
/optimize "improve performance of findDuplicates function"
# Output:
# ✅ Analyzed findDuplicates algorithm
# ✅ Current complexity: O(n²) nested loops
# ✅ Current performance: 12.5s for 10,000 items
#
# ✅ Optimization: Replace nested loops with Set-based approach
# Complexity: O(n²) → O(n)
# Performance: 12.5s → 45ms (99.6% faster) 🚀
#
# Benchmark results (10,000 items):
# Before: 12,500ms
# After: 45ms
# Improvement: 277x faster
#
# All 34 tests pass ✅
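The Set-based approach replaces the nested loops with a single pass. A generic sketch, since the original function's signature is not shown here:

```typescript
// One pass, O(n) time: `seen` tracks first sightings, `duplicates`
// collects any item seen more than once (reported once each).
function findDuplicates<T>(items: T[]): T[] {
  const seen = new Set<T>();
  const duplicates = new Set<T>();
  for (const item of items) {
    if (seen.has(item)) duplicates.add(item); // second sighting = duplicate
    else seen.add(item);
  }
  return [...duplicates];
}

console.log(findDuplicates([1, 2, 3, 2, 4, 1])); // [2, 1]
```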
/optimize "add caching to user profile API"
# Output:
# ✅ Analyzed /api/profile endpoint (current: 250ms p95)
# ✅ Identified caching opportunity:
# - User data changes infrequently
# - High read/write ratio (1000:1)
# - Cache hit rate potential: 95%
#
# ✅ Designed cache strategy:
# - Redis cache with 5-minute TTL
# - Cache warming for active users
# - Invalidation on user updates
#
# ✅ Implemented Redis caching
# Cache hit: 5ms
# Cache miss: 250ms (writes to cache)
# Expected p95: 25ms (90% faster)
#
# ✅ Load test results:
# With 95% cache hit rate: 25ms p95 ✅
# Throughput: 500 rps → 5,000 rps (10x) 🚀
#
# All 67 tests pass ✅
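The cache-aside strategy described above might look like this sketch, using ioredis; fetchUserFromDb, updateUserProfile's persistence step, and the key scheme are hypothetical:

```typescript
import Redis from "ioredis";

const redis = new Redis();
const TTL_SECONDS = 300; // 5-minute TTL, matching the strategy above

// Hypothetical database accessor.
declare function fetchUserFromDb(userId: string): Promise<{ id: string; name: string }>;

async function getUserProfile(userId: string) {
  const key = `user:profile:${userId}`;

  // Cache hit: the ~5ms path.
  const cached = await redis.get(key);
  if (cached) return JSON.parse(cached);

  // Cache miss: fall back to the database, then populate the cache.
  const user = await fetchUserFromDb(userId);
  await redis.set(key, JSON.stringify(user), "EX", TTL_SECONDS);
  return user;
}

// Invalidation on update keeps reads consistent with writes.
async function updateUserProfile(userId: string, changes: object) {
  // ...persist changes to the database (hypothetical)...
  await redis.del(`user:profile:${userId}`);
}
```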
Before completing optimization, every quality gate must pass: the full test suite succeeds, each accepted change meets the configured minimum improvement, and no metric regresses beyond the configured limit.
If any gate fails, the offending change is reverted (per auto_revert_on_regression) and an alternative approach is attempted, following the iteration flow above.
optimize-output/
├── BASELINE_METRICS.md # Initial performance
├── OPTIMIZATION_PLAN.md # Step-by-step plan
├── OPTIMIZATION_SUMMARY.md # What was optimized
├── IMPROVEMENTS.md # Before/after metrics
├── BENCHMARKS.md # Detailed benchmark results
├── LOAD_TEST_RESULTS.md # Performance under load
├── COMMIT_MESSAGE.md # Suggested commit message
└── MONITORING_RECOMMENDATIONS.md # What to monitor
/audit identifies slow operations → /optimize speeds them up

The plugin supports these optimization strategies:
- Database: missing indexes, N+1 elimination, trimming over-fetched data
- Algorithms: complexity reduction (e.g., O(n²) → O(n)), better data structures
- Caching: cache-aside layers with TTLs, cache warming, invalidation on writes
- Frontend: memoization, re-render reduction, virtual scrolling
- Infrastructure: load testing and throughput tuning