Establishes performance baselines through systematic profiling of application performance, measuring response times, throughput, and resource usage.
Profile application performance to establish baselines by measuring response times, throughput, and resource usage for APIs, databases, and frontend components. Use this when you need to identify slow operations before optimization, capturing metrics like p50/p95/p99 latencies, CPU/memory consumption, and request rates.
/plugin marketplace add avovello/cc-plugins
/plugin install optimize@cc-plugins
✅ DOES:
- Profile APIs, databases, frontend components, and individual functions
- Capture response times (p50/p95/p99), throughput, error rates, and CPU/memory usage
- Flag operations that exceed performance thresholds and produce a baseline report

❌ DOES NOT:
- Analyze root causes of bottlenecks (hand off to bottleneck-identifier)
- Implement optimizations or modify application code
Tools by Language:
# Node.js
npm install clinic
clinic doctor -- node server.js
# Python
pip install py-spy
py-spy record -o profile.svg -- python app.py
# PHP
php -d xdebug.mode=profile script.php
# Go
go test -cpuprofile=cpu.prof -memprofile=mem.prof
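The Go command above writes `cpu.prof` and `mem.prof`; a typical next step is to inspect them with `go tool pprof` (a minimal sketch, assuming the file names from the command above; very old Go versions may also need the generated test binary passed in):

```bash
go tool pprof -top cpu.prof        # top functions by CPU time
go tool pprof -top mem.prof        # top allocators
go tool pprof -http=:8080 cpu.prof # interactive web UI with flame graph
```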
Metrics to Capture:
- Response times: p50, p95, p99, and maximum latency
- Throughput: requests per second
- Error rate: percentage of failed requests
- Resource usage: CPU and memory consumption
- Database: query execution time and call counts
- Frontend: Core Web Vitals (FCP, LCP, TTI, TBT, CLS)
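If a load-testing tool does not report percentiles directly, they can be derived from raw latency samples. A minimal Python sketch (the sample values are hypothetical):

```python
import statistics

# Hypothetical latency samples in milliseconds collected during a test run
latencies_ms = [120, 95, 210, 450, 130, 88, 305, 142, 99, 510]

# quantiles() with n=100 returns cut points for the 1st..99th percentiles
cuts = statistics.quantiles(latencies_ms, n=100)
print(f"p50: {statistics.median(latencies_ms):.0f}ms")
print(f"p95: {cuts[94]:.0f}ms")
print(f"p99: {cuts[98]:.0f}ms")
```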
PostgreSQL:
-- Enable query logging
SET log_min_duration_statement = 100; -- Log queries > 100ms
-- Analyze query performance
EXPLAIN ANALYZE SELECT ...;
-- Check slow query log
SELECT query, calls, mean_exec_time, max_exec_time
FROM pg_stat_statements
ORDER BY mean_exec_time DESC
LIMIT 20;
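Note that `pg_stat_statements` is an extension and is not enabled by default; it must be listed in `shared_preload_libraries` and created in the database before the query above returns data:

```sql
-- Requires pg_stat_statements in shared_preload_libraries (postgresql.conf), then:
CREATE EXTENSION IF NOT EXISTS pg_stat_statements;
-- Optionally reset counters before a profiling run
SELECT pg_stat_statements_reset();
```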
MySQL:
-- Enable slow query log
SET GLOBAL slow_query_log = 'ON';
SET GLOBAL long_query_time = 0.1; -- 100ms
-- Profile query
EXPLAIN SELECT ...;
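Once the slow query log is populated, `mysqldumpslow` (shipped with MySQL) can summarize it. A sketch; the log path is an assumption and depends on `slow_query_log_file`:

```bash
# Top 10 queries by total time; check SHOW VARIABLES LIKE 'slow_query_log_file' for the path
mysqldumpslow -s t -t 10 /var/log/mysql/mysql-slow.log
```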
MongoDB:
// Enable profiling
db.setProfilingLevel(2); // Profile all queries
// Check slow queries
db.system.profile.find({millis: {$gt: 100}}).sort({ts: -1});
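Profiling level 2 records every operation and adds overhead; in practice, level 1 with a slow-operation threshold is usually enough, and the profiler should be switched off after the run:

```javascript
// Profile only operations slower than 100ms, then restore the default when done
db.setProfilingLevel(1, { slowms: 100 });
// ...run the workload, inspect db.system.profile, then:
db.setProfilingLevel(0);
```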
Chrome DevTools:
// Performance API
const start = performance.now();
// ... code to profile
const end = performance.now();
console.log(`Execution time: ${end - start}ms`);
// React DevTools Profiler
import { Profiler } from 'react';
function onRenderCallback(
  id, phase, actualDuration, baseDuration,
  startTime, commitTime, interactions
) {
  console.log(`${id} took ${actualDuration}ms`);
}

<Profiler id="MyComponent" onRender={onRenderCallback}>
  <MyComponent />
</Profiler>
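For finer-grained browser timings, the `performance.mark()` / `performance.measure()` API records named spans that appear in the DevTools Performance panel. A sketch; `fetchUsers()` and the span names are illustrative:

```javascript
performance.mark('fetch-users-start');
fetchUsers().then(() => {                // hypothetical async call being profiled
  performance.mark('fetch-users-end');
  performance.measure('fetch-users', 'fetch-users-start', 'fetch-users-end');
  const [entry] = performance.getEntriesByName('fetch-users');
  console.log(`fetch-users took ${entry.duration}ms`);
});
```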
Lighthouse:
npm install -g lighthouse
lighthouse https://example.com --output=json --output-path=./report.json
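Headline metrics can be pulled out of the JSON report with `jq`; a minimal sketch, assuming the standard Lighthouse report schema (audit IDs can vary between versions):

```bash
# Performance score (0–1) and Largest Contentful Paint in milliseconds
jq '.categories.performance.score, .audits["largest-contentful-paint"].numericValue' ./report.json
```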
Python:
import cProfile
import pstats
# Profile function
cProfile.run('my_function()', 'profile_stats')
# Analyze results
stats = pstats.Stats('profile_stats')
stats.sort_stats('cumulative')
stats.print_stats(20)
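For timing a single hot function rather than the whole program, a lightweight decorator built on `time.perf_counter()` avoids profiler overhead. A sketch; the function name is illustrative:

```python
import functools
import time

def timed(fn):
    """Log wall-clock execution time of the wrapped function."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        try:
            return fn(*args, **kwargs)
        finally:
            elapsed_ms = (time.perf_counter() - start) * 1000
            print(f"{fn.__name__} took {elapsed_ms:.1f}ms")
    return wrapper

@timed
def my_function():
    ...  # code being profiled
```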
Node.js:
const { performance } = require('perf_hooks');
const start = performance.now();
myFunction();
const end = performance.now();
console.log(`Execution time: ${end - start}ms`);
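When there are many measurements, `PerformanceObserver` from the same `perf_hooks` module avoids scattering start/end variables; a sketch using the `myFunction()` call from above:

```javascript
const { performance, PerformanceObserver } = require('perf_hooks');

// Log every completed measure as it is recorded
const obs = new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    console.log(`${entry.name}: ${entry.duration.toFixed(1)}ms`);
  }
});
obs.observe({ entryTypes: ['measure'] });

performance.mark('work-start');
myFunction();                      // function under test, as above
performance.mark('work-end');
performance.measure('myFunction', 'work-start', 'work-end');
```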
PHP:
<?php
// XHProf profiling
xhprof_enable(XHPROF_FLAGS_CPU + XHPROF_FLAGS_MEMORY);
// Code to profile
myFunction();
// Get profiling data
$xhprof_data = xhprof_disable();
$xhprof_runs = new XHProfRuns_Default();
$run_id = $xhprof_runs->save_run($xhprof_data, "myapp");
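XHProf requires the extension to be installed; when it is not available, `microtime(true)` gives a quick wall-clock fallback:

```php
<?php
// Simple wall-clock timing fallback when no profiler extension is installed
$start = microtime(true);
myFunction();
$elapsedMs = (microtime(true) - $start) * 1000;
printf("myFunction took %.1fms\n", $elapsedMs);
```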
Based on optimization target:
- API Endpoint: profile endpoint response time
- Database: profile query execution time
- Frontend: profile render time and interaction latency
- Function: profile function execution time
- Full Application: profile overall resource usage
# Install profiling tools
# Node.js
npm install autocannon clinic 0x
# Python
pip install py-spy memory_profiler line_profiler
# General
npm install -g lighthouse artillery
# API endpoint profiling (Node.js)
autocannon -c 10 -d 30 http://localhost:3000/api/users
# Application profiling (Python)
py-spy record -o profile.svg --duration 60 -- python app.py
# Database profiling
# Enable slow query log and run application normally
# Frontend profiling
lighthouse http://localhost:3000 --view
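Artillery (installed above) can drive a longer, scripted load test. A minimal config sketch; the target URL, phase settings, and endpoint are assumptions to adapt:

```yaml
# load-test.yml — run with: artillery run load-test.yml
config:
  target: "http://localhost:3000"
  phases:
    - duration: 60      # seconds
      arrivalRate: 10   # new virtual users per second
scenarios:
  - flow:
      - get:
          url: "/api/users"
```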
API Endpoints:
## GET /api/users
**Response Times**:
- Average: 145ms
- p50 (median): 120ms
- p95: 280ms
- p99: 450ms
- Max: 890ms
**Throughput**: 68 requests/second
**Status Codes**:
- 200: 99.2%
- 500: 0.8%
Database Queries:
## Top 10 Slowest Queries
1. SELECT * FROM orders JOIN customers... (avg: 450ms, calls: 1,234)
2. SELECT * FROM products WHERE category... (avg: 320ms, calls: 5,678)
3. UPDATE users SET last_login... (avg: 180ms, calls: 3,456)
...
Frontend:
## Dashboard Component
**Metrics**:
- First Contentful Paint: 1.2s
- Largest Contentful Paint: 2.8s
- Time to Interactive: 3.5s
- Total Blocking Time: 850ms
- Cumulative Layout Shift: 0.12
**Lighthouse Score**: 65/100
Flag operations exceeding thresholds:
## Slow Operations (>100ms)
### Critical (>1000ms) 🔴
1. POST /api/orders/process - 1,450ms p95
2. GET /api/analytics/report - 2,100ms p95
### High (500-1000ms) ⚠️
3. GET /api/users?include=orders - 650ms p95
4. Dashboard initial render - 880ms
### Medium (100-500ms)
5. GET /api/products?category=all - 280ms p95
6. Database: SELECT with JOIN on 3 tables - 320ms avg
...
CPU Usage:
## CPU Profile
**Top Functions by CPU Time**:
1. parseUserData() - 35% (2,450ms total)
2. calculateTotalPrice() - 22% (1,540ms total)
3. validateOrderItems() - 18% (1,260ms total)
4. serializeResponse() - 12% (840ms total)
5. authenticateRequest() - 8% (560ms total)
Memory Usage:
## Memory Profile
**Current Usage**: 450 MB
**Peak Usage**: 680 MB
**Allocated**: 1.2 GB total
**Top Allocators**:
1. OrderService.processOrder() - 125 MB
2. ProductCache.loadAll() - 85 MB
3. UserSession.create() - 45 MB
# Performance Profile Report
**Date**: 2025-01-15
**Duration**: 30 minutes
**Environment**: Production-like (staging)
## Executive Summary
- **API Response Time**: 145ms average (p95: 280ms)
- **Throughput**: 68 requests/second
- **Error Rate**: 0.8%
- **Slow Operations**: 12 operations exceeding thresholds
## Critical Issues 🔴
### 1. POST /api/orders/process
**Current Performance**: 1,450ms p95
**Threshold**: 200ms
**Impact**: Critical user-facing endpoint
**Volume**: 234 requests/hour
### 2. GET /api/analytics/report
**Current Performance**: 2,100ms p95
**Threshold**: 1,000ms
**Impact**: Admin dashboard blocking
**Volume**: 45 requests/hour
## API Endpoint Performance
### GET /api/users
- **Average**: 120ms
- **p50**: 95ms
- **p95**: 280ms
- **p99**: 450ms
- **Throughput**: 95 req/s
- **Status**: ⚠️ p95 exceeds target (200ms)
### POST /api/orders
- **Average**: 340ms
- **p50**: 280ms
- **p95**: 650ms
- **p99**: 1,200ms
- **Throughput**: 28 req/s
- **Status**: 🔴 Critical - exceeds all thresholds
[Continue for all endpoints...]
## Database Performance
### Slow Queries (>100ms)
**Query 1**: SELECT * FROM orders WHERE user_id = ? (N+1 pattern)
- **Average Time**: 450ms
- **Calls**: 1,234/hour
- **Total Impact**: 555 seconds/hour
- **Status**: 🔴 Critical bottleneck
**Query 2**: SELECT * FROM products JOIN categories...
- **Average Time**: 320ms
- **Calls**: 5,678/hour
- **Total Impact**: 1,817 seconds/hour
- **Status**: 🔴 Critical bottleneck
## Frontend Performance
### Dashboard Page
- **First Contentful Paint**: 1.2s (target: <1s)
- **Time to Interactive**: 3.5s (target: <3s)
- **Lighthouse Score**: 65 (target: >90)
- **Status**: ⚠️ Needs optimization
### Component Render Times
- **UserList**: 450ms (847 renders)
- **OrdersTable**: 320ms (234 renders)
- **AnalyticsChart**: 180ms (56 renders)
## Resource Usage
### CPU Profile
- **Average CPU**: 35%
- **Peak CPU**: 78%
- **Top Consumer**: parseUserData() - 35% of total time
### Memory Profile
- **Current**: 450 MB
- **Peak**: 680 MB
- **Potential Leak**: OrderService (+2.5 MB/hour)
## Recommendations
### High Priority
1. Optimize /api/orders/process endpoint (1,450ms → target <200ms)
2. Fix N+1 query in orders retrieval
3. Add database indexes on frequently queried columns
### Medium Priority
4. Implement caching for product catalog
5. Optimize frontend rendering (reduce re-renders)
6. Investigate potential memory leak in OrderService
### Low Priority
7. Compress API responses
8. Implement CDN for static assets
9. Optimize image loading
## Next Steps
1. Run **bottleneck-identifier** to analyze root causes
2. Prioritize optimizations by impact
3. Create optimization plan
4. Implement and benchmark improvements
Before completing profiling:
- Baselines captured for every target area (API endpoints, database queries, frontend, resource usage)
- Slow operations flagged against thresholds (critical/high/medium)
- Performance profile report generated with executive summary and recommendations
- Findings handed off to bottleneck-identifier for root-cause analysis