Analyzes profiling data to identify performance bottlenecks, categorize them by type, and prioritize optimization efforts using the 80/20 rule.
```
/plugin marketplace add avovello/cc-plugins
/plugin install optimize@cc-plugins
```
✅ DOES:
❌ DOES NOT:
Symptoms:
Common Causes:
Example:
```python
# N+1 Query Problem
orders = Order.query.all()        # 1 query
for order in orders:
    customer = order.customer     # N queries!
    print(customer.name)

# Impact: 1 + 1000 = 1001 queries (if 1000 orders)
```
Potential Improvement: 90-99% faster (1001 queries → 2 queries)
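The query count grows linearly with the row count, which is why the improvement is so dramatic; a tiny model of the arithmetic (the `eager` flag is a hypothetical stand-in for batched/eager loading):

```python
def query_count(n_orders, eager=False):
    """Lazy loading issues 1 + N queries; eager/batched loading issues 2."""
    return 2 if eager else 1 + n_orders

# 1000 orders: 1001 queries lazily vs 2 with a single batched lookup
```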
Symptoms:
Common Causes:
Example:
```javascript
// O(n²) - Nested loops
function findDuplicates(array) {
  const duplicates = [];
  for (let i = 0; i < array.length; i++) {
    for (let j = i + 1; j < array.length; j++) {
      if (array[i] === array[j]) {
        duplicates.push(array[i]);
      }
    }
  }
  return duplicates;
}
// Impact: 10,000 items ≈ 50,000,000 comparisons (n(n-1)/2)
```
Potential Improvement: 90-99% faster (O(n²) → O(n))
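The O(n) rewrite trades the inner loop for a constant-time set lookup; a sketch of the same duplicate search (in Python here, for consistency with the document's other examples):

```python
def find_duplicates(items):
    """Return each duplicated value once, in O(n) time and space."""
    seen = set()         # values encountered so far
    duplicates = set()   # values seen more than once
    for item in items:
        if item in seen:
            duplicates.add(item)
        else:
            seen.add(item)
    return list(duplicates)

# 10,000 items now cost ~10,000 set operations instead of tens of millions of comparisons
```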
Symptoms:
Common Causes:
Potential Improvement: 80-95% faster (with 90% cache hit rate)
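The "80-95% faster" range follows from the expected-latency formula for a cache; a quick sketch (the 2ms cache / 100ms backend figures are illustrative assumptions, not measurements):

```python
def expected_latency_ms(hit_rate, cache_ms, backend_ms):
    """Average latency with a cache: hits pay cache_ms, misses pay both."""
    return hit_rate * cache_ms + (1 - hit_rate) * (cache_ms + backend_ms)

baseline = 100.0                                   # uncached backend call
cached = expected_latency_ms(0.90, 2.0, 100.0)     # 90% cache hit rate
improvement = 1 - cached / baseline
# With these assumed numbers: cached = 12.0ms, i.e. 88% faster
```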
Symptoms:
Common Causes:
Potential Improvement: 50-90% faster
Symptoms:
Common Causes:
Potential Improvement: 70-95% faster
Symptoms:
Common Causes:
Potential Improvement: 40-80% improvement
**Input**: BASELINE_METRICS.md from performance-profiler
Key metrics to analyze:
- API response times (identify slow endpoints)
- Database query times (identify slow queries)
- CPU profile (identify hot functions)
- Memory profile (identify memory hogs)
- Frontend metrics (identify render bottlenecks)
Find the 20% causing 80% of performance issues:
## Time Budget Analysis
**Total Request Time**: 450ms (for /api/orders)
**Time Breakdown**:
1. Database queries: 320ms (71%) ← **PRIMARY BOTTLENECK**
2. Business logic: 80ms (18%)
3. Serialization: 30ms (7%)
4. Authentication: 20ms (4%)
**Conclusion**: Database queries are the #1 bottleneck - optimize first
## Bottleneck: Slow /api/orders endpoint (450ms)
**Category**: Database (N+1 Query)
**Root Cause**:
```python
# Current code
orders = Order.query.all()        # 1 query
for order in orders:
    items = order.items           # N queries - BOTTLENECK!
    customer = order.customer     # N queries - BOTTLENECK!
```

**Evidence**:
- **Type**: N+1 Query Pattern
- **Impact**: CRITICAL
### 4. Estimate Potential Improvement

````markdown
## Optimization Potential

**Current Performance**: 450ms (201 queries)

**After Optimization** (eager loading):

```python
orders = Order.query.options(
    joinedload(Order.items),
    joinedload(Order.customer)
).all()  # 1 query with JOINs
```

**Expected Performance**: 130ms (3 queries)
**Improvement**: 71% faster (450ms → 130ms)
````

**Calculation**:
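The 71% figure is the relative reduction in response time, measured against the baseline:

```python
before_ms, after_ms = 450, 130

improvement = (before_ms - after_ms) / before_ms
# (450 - 130) / 450 = 0.711..., i.e. 71% faster
```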
### 5. Estimate Effort

```markdown
## Effort Estimate

**Complexity**: LOW
- Code change: ~5 lines (add eager loading)
- Testing: Use existing test suite
- Risk: Low (query results identical, just fetched differently)

**Time Estimate**: 30-60 minutes
- Implementation: 15 min
- Testing: 15 min
- Code review: 15 min
- Deployment: 15 min

**Expertise Required**: Intermediate (understanding of ORMs, JOINs)
```
## Priority Calculation
**Impact Score**: 9/10
- Time saved: 240ms (71% of request time)
- Volume: 234 requests/hour
- Total time saved: 234 × 240ms = 56,160ms = 56 seconds per hour
**Effort Score**: 2/10 (low effort)
**Priority**: Impact / Effort = 9 / 2 = **4.5** (VERY HIGH)
**Classification**: 🚀 QUICK WIN (high impact, low effort)
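The Impact / Effort scoring can be captured as a small helper; the classification thresholds below are an assumption inferred from the labels used in this document, not a fixed rule:

```python
def priority_score(impact, effort):
    """Priority = Impact / Effort, both scored on a 1-10 scale."""
    return impact / effort

def classify(impact, effort):
    """Report label; thresholds are assumed for illustration."""
    score = priority_score(impact, effort)
    if score >= 4.0 and effort <= 3:
        return "QUICK WIN"  # high impact, low effort
    if score >= 3.0:
        return "HIGH"
    return "NORMAL"

# N+1 fix: impact 9, effort 2 -> priority 4.5 -> QUICK WIN
```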
# Bottleneck Analysis Report
**Date**: 2025-01-15
**Source**: Performance profile from 2025-01-15
**Target**: /api/orders endpoint
## Executive Summary
- **Total Bottlenecks Identified**: 8
- **Quick Wins**: 3 (high impact, low effort)
- **Total Potential Improvement**: 78% faster (450ms → 100ms)
- **Recommended Starting Point**: Fix N+1 queries (71% improvement)
## Critical Bottlenecks (Priority > 4.0)
### 🚀 #1: N+1 Queries in Order Retrieval
**Priority**: 4.5 (HIGHEST)
**Category**: Database - N+1 Query Pattern
**Current Performance**: 320ms (71% of request time)
**Root Cause**:
```python
# Fetching orders without eager loading
orders = Order.query.all()        # 1 query
for order in orders:
    items = order.items           # N queries!
    customer = order.customer     # N queries!
```

**Query Pattern**:

**Impact**:

**Optimization Strategy**:

```python
# Use eager loading with JOIN
orders = Order.query.options(
    joinedload(Order.items),
    joinedload(Order.customer)
).all()
```

**Expected Result**:

**Effort**: 30-60 minutes (LOW)
**ROI**: 🚀 VERY HIGH - Quick win!
### 🚀 #2: Missing Index on orders.customer_id

**Priority**: 4.0
**Category**: Database - Missing Index
**Current Performance**: 120ms per query
**Root Cause**: Full table scan on orders table (50,000 rows)

**Evidence**:

```sql
EXPLAIN SELECT * FROM orders WHERE customer_id = 123;
-- Seq Scan on orders  (cost=0.00..1234.00 rows=50000)
```

**Optimization Strategy**: Add index

```sql
CREATE INDEX idx_orders_customer_id ON orders(customer_id);
```

**Expected Result**:

**Effort**: 15 minutes (VERY LOW)
**ROI**: 🚀 EXCELLENT - Quick win!
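The plan change an index produces can be demonstrated end to end with the stdlib `sqlite3` module (a stand-in for the production database; plan wording differs across engines):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER)")

# Without an index, SQLite plans a full table scan ("SCAN orders")
before = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM orders WHERE customer_id = 123"
).fetchall()

conn.execute("CREATE INDEX idx_orders_customer_id ON orders(customer_id)")

# With the index, the plan switches to an index search
after = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM orders WHERE customer_id = 123"
).fetchall()
```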
### ⚡ #3: O(n²) Algorithm in Price Calculation

**Priority**: 3.0
**Category**: Algorithm Complexity (O(n²))
**Current Performance**: 80ms (18% of request time)

**Root Cause**:

```javascript
function calculateTotalPrice(items, discounts) {
  let total = 0;
  for (let item of items) {
    for (let discount of discounts) { // O(n²)!
      if (discount.applies(item)) {
        total += item.price * (1 - discount.rate);
      }
    }
  }
  return total;
}
```

**Optimization Strategy**: Use hash map for O(n) lookup

```javascript
function calculateTotalPrice(items, discounts) {
  // Build discount rate map once: O(n)
  const discountMap = new Map();
  discounts.forEach(d => discountMap.set(d.productId, d.rate));

  // Single pass: O(n); items without a discount get rate 0
  return items.reduce((total, item) => {
    const rate = discountMap.get(item.productId) || 0;
    return total + item.price * (1 - rate);
  }, 0);
}
```

**Expected Result**:

**Effort**: 1-2 hours (MEDIUM)
**ROI**: ⚡ HIGH
[Continue for all bottlenecks...]
### #4: No Response Compression

**Priority**: 2.0
**Category**: Network - No Compression
**Current Impact**: 30ms serialization + network transfer

**Optimization**: Enable gzip compression
**Expected Improvement**: 60% smaller payloads, 40% faster transfer
**Effort**: 30 minutes
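The payload savings can be sanity-checked with the stdlib before touching server config; the sample records below are illustrative, not from the real API:

```python
import gzip
import json

# A repetitive JSON payload, typical of list endpoints
payload = json.dumps(
    [{"id": i, "status": "shipped", "total": 19.99} for i in range(500)]
).encode()

compressed = gzip.compress(payload)
ratio = 1 - len(compressed) / len(payload)
# Repetitive JSON routinely compresses by well over 60%
```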
| Priority | Bottleneck | Category | Current | Potential | Effort | ROI |
|---|---|---|---|---|---|---|
| 4.5 | N+1 queries | Database | 320ms | 75% faster | 1h | 🚀 |
| 4.0 | Missing index | Database | 120ms | 96% faster | 15m | 🚀 |
| 3.0 | O(n²) algorithm | Algorithm | 80ms | 81% faster | 2h | ⚡ |
| 2.0 | No compression | Network | 30ms | 40% faster | 30m | ⚡ |
| 1.5 | Large result sets | Database | 25ms | 50% faster | 3h | - |
**Phase 1 (Quick Wins)**:
1. Fix N+1 queries (Priority 4.5)
2. Add missing index (Priority 4.0)

**Phase 1 Total**: ~1.25 hours, 71% overall improvement

**Phase 2**:
3. Optimize algorithm (Priority 3.0)
4. Enable compression (Priority 2.0)

**Phase 2 Total**: ~2.5 hours, additional 12% improvement

**Final Expected Performance**: 450ms → 100ms (78% faster)
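Note that the end-to-end figure is measured against the original baseline rather than by summing phase percentages; a sketch of the arithmetic (the 71% phase-1 figure comes from the plan above):

```python
baseline_ms = 450
after_phase1_ms = baseline_ms * (1 - 0.71)   # ~130ms once the database fixes land
final_ms = 100                               # target after both phases

total_improvement = 1 - final_ms / baseline_ms
# 450ms -> 100ms is a ~78% end-to-end reduction
```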