Performance optimization with profiling, benchmarking, and verification
Optimizes performance through profiling, benchmarking, and incremental improvements for bundle, database, rendering, API, or memory targets.
/plugin marketplace add squirrelsoft-dev/agency
/plugin install agency@squirrelsoft-dev-tools

Argument: target (bundle/database/rendering/api/memory/auto)

Performance optimization with profiling, benchmarking, and measurable improvements.
Optimize performance for target: $ARGUMENTS
Follow the optimization lifecycle: detect target → profile baseline → plan optimizations → implement incrementally → verify improvements → report results.
IMMEDIATELY activate the agency workflow patterns skill:
Use the Skill tool to activate: agency-workflow-patterns
This skill contains critical orchestration patterns, agent selection guidelines, and workflow strategies you MUST follow.
Quickly gather project context to select appropriate profiling tools and optimization strategies.
<!-- Component: prompts/context/framework-detection.md -->
Execute framework detection to determine the application framework. This will guide profiling tool selection and optimization strategies.
Refer to framework detection component for full detection logic covering:
Store result in: FRAMEWORK variable
Execute database/ORM detection to identify the data layer technology. This is essential for database optimization targets.
Refer to database detection component for full detection logic covering:
Store result in: DATABASE variable
Execute build tool detection to identify the bundler/build system. Required for bundle optimization targets.
Refer to build tool detection component for full detection logic covering:
Store result in: BUILD_TOOL variable
Execute testing framework detection for benchmark verification.
Refer to testing framework detection component for full detection logic covering:
Store result in: TEST_FRAMEWORK variable
Use detected context to:
Analyze $ARGUMENTS to determine optimization target:
Bundle Optimization if:
$ARGUMENTS = "bundle" or "build" or "size"Database Optimization if:
$ARGUMENTS = "database" or "db" or "queries" or "sql"Rendering Optimization if:
$ARGUMENTS = "rendering" or "render" or "ui" or "ssr"API Optimization if:
$ARGUMENTS = "api" or "endpoint" or "latency" or "throughput"Memory Optimization if:
$ARGUMENTS = "memory" or "heap" or "leak"Auto-Detect if:
$ARGUMENTS = "auto" or empty or unclearUse TodoWrite to create tracking for optimization phases:
[
{
"content": "Detect optimization target",
"status": "in_progress",
"activeForm": "Detecting optimization target"
},
{
"content": "Profile current performance (baseline)",
"status": "pending",
"activeForm": "Profiling current performance"
},
{
"content": "Create optimization plan",
"status": "pending",
"activeForm": "Creating optimization plan"
},
{
"content": "Implement optimizations incrementally",
"status": "pending",
"activeForm": "Implementing optimizations"
},
{
"content": "Verify improvements with benchmarks",
"status": "pending",
"activeForm": "Verifying improvements"
},
{
"content": "Generate performance report",
"status": "pending",
"activeForm": "Generating performance report"
}
]
<!-- Component: prompts/specialist-selection/user-approval.md -->
If target is auto-detected, use AskUserQuestion to confirm:
Question: "Detected optimization target: [TARGET]. Is this correct?"
Options:
- "Yes, proceed with [TARGET] optimization"
- "Change to bundle optimization"
- "Change to database optimization"
- "Change to rendering optimization"
- "Change to API optimization"
- "Run full performance audit (all targets)"
Mark todo #1 as completed.
# Create directory for profiling results
mkdir -p .agency/optimizations
mkdir -p .agency/optimizations/baseline
mkdir -p .agency/optimizations/benchmarks
Mark todo #2 as in_progress.
For Next.js:
# Install bundle analyzer if not present
if ! grep -q "@next/bundle-analyzer" package.json; then
npm install --save-dev @next/bundle-analyzer
fi
# Create next.config.js with analyzer
# Add bundle analyzer configuration
# Run production build with analyzer
ANALYZE=true npm run build
# Output will open in browser - save screenshot or copy stats
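If the project does not already wrap its config, the analyzer hookup is typically a small wrapper in next.config.js; a minimal sketch (merge with the existing config, the `reactStrictMode` line is only a placeholder for existing options):

```js
// next.config.js — minimal @next/bundle-analyzer wiring (sketch)
const withBundleAnalyzer = require('@next/bundle-analyzer')({
  // Only analyze when ANALYZE=true is set, so normal builds stay fast
  enabled: process.env.ANALYZE === 'true',
});

module.exports = withBundleAnalyzer({
  // ...keep the project's existing Next.js options here
  reactStrictMode: true,
});
```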
For Webpack:
# Install webpack-bundle-analyzer
npm install --save-dev webpack-bundle-analyzer
# Add to webpack config or run
npx webpack-bundle-analyzer dist/stats.json
For Vite:
# Install rollup-plugin-visualizer (bundle visualizer for Vite)
npm install --save-dev rollup-plugin-visualizer
# Add to vite.config.ts and run build
npm run build
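A minimal sketch of the plugin registration (assumes an ESM vite.config.ts; merge into the project's existing plugins array):

```ts
// vite.config.ts — rollup-plugin-visualizer wiring (sketch)
import { defineConfig } from 'vite';
import { visualizer } from 'rollup-plugin-visualizer';

export default defineConfig({
  plugins: [
    // Emits an interactive treemap (stats.html) after `npm run build`
    visualizer({ filename: 'stats.html', gzipSize: true, brotliSize: true }),
  ],
});
```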
Capture Baseline Metrics:
# Get build output size (same measurement used for post-optimization benchmarks)
du -sh .next/ > .agency/optimizations/baseline/bundle-size.txt
# Or for general build
du -sh dist/ > .agency/optimizations/baseline/bundle-size.txt
For PostgreSQL (Prisma/Drizzle/Raw):
# Enable query logging
# In postgres: ALTER DATABASE yourdb SET log_statement = 'all';
# Run application under typical load
# Use slow query log
# Analyze specific queries
psql -d yourdb -c "EXPLAIN ANALYZE SELECT ..." > .agency/optimizations/baseline/query-plans.txt
# Check for missing indexes
psql -d yourdb -c "
SELECT schemaname, tablename, attname, n_distinct, correlation
FROM pg_stats
WHERE schemaname NOT IN ('pg_catalog', 'information_schema')
ORDER BY abs(correlation) DESC
LIMIT 20;
" > .agency/optimizations/baseline/index-candidates.txt
For Prisma:
# Enable query logging in Prisma
# Configure the PrismaClient constructor with log: ["query", "info", "warn", "error"]
# Or use Prisma Studio
npx prisma studio
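If event-based logging is preferred, Prisma can emit per-query timings from the client; a minimal sketch (the 100 ms threshold and file name are assumptions):

```js
// prisma-client.js — surface slow queries while capturing the baseline (sketch)
const { PrismaClient } = require('@prisma/client');

const prisma = new PrismaClient({
  log: [{ emit: 'event', level: 'query' }, 'warn', 'error'],
});

// Each query event carries the SQL text and duration in milliseconds
prisma.$on('query', (e) => {
  if (e.duration > 100) {
    console.warn(`[slow query] ${e.duration}ms: ${e.query}`);
  }
});

module.exports = prisma;
```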
Capture Baseline Metrics:
Using Lighthouse (CLI):
# Install Lighthouse CLI
npm install -g lighthouse
# Run Lighthouse on key pages
lighthouse http://localhost:3000 --output json --output-path .agency/optimizations/baseline/lighthouse-home.json
lighthouse http://localhost:3000/dashboard --output json --output-path .agency/optimizations/baseline/lighthouse-dashboard.json
# Extract key metrics
cat .agency/optimizations/baseline/lighthouse-home.json | grep -E "(first-contentful-paint|largest-contentful-paint|cumulative-layout-shift|total-blocking-time)"
Using React DevTools Profiler:
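If a programmatic capture is preferred over the DevTools UI, React's Profiler component can log commit timings; a minimal sketch (component names are illustrative):

```jsx
import { Profiler } from 'react';
import Dashboard from './Dashboard'; // illustrative component

// Called by React with timing data for every committed render of the subtree
function onRender(id, phase, actualDuration) {
  console.log(`[profiler] ${id} (${phase}): ${actualDuration.toFixed(1)}ms`);
}

export function ProfiledDashboard() {
  return (
    <Profiler id="Dashboard" onRender={onRender}>
      <Dashboard />
    </Profiler>
  );
}
```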
Save the profiler export to: .agency/optimizations/baseline/react-profiler.json

Capture Baseline Metrics:
Using autocannon (HTTP benchmarking):
# Install autocannon
npm install -g autocannon
# Benchmark key endpoints
autocannon -c 10 -d 30 http://localhost:3000/api/users > .agency/optimizations/baseline/api-users.txt
autocannon -c 10 -d 30 http://localhost:3000/api/posts > .agency/optimizations/baseline/api-posts.txt
# Benchmark with POST requests
autocannon -c 10 -d 30 -m POST -H "Content-Type: application/json" -b '{"test": true}' http://localhost:3000/api/create
Using k6 (load testing):
# Install k6
# Create k6 script
cat > .agency/optimizations/baseline/k6-script.js <<'EOF'
import http from 'k6/http';
import { check, sleep } from 'k6';
export let options = {
vus: 10,
duration: '30s',
};
export default function() {
let res = http.get('http://localhost:3000/api/users');
check(res, { 'status is 200': (r) => r.status === 200 });
sleep(1);
}
EOF
# Run k6
k6 run .agency/optimizations/baseline/k6-script.js > .agency/optimizations/baseline/k6-results.txt
Capture Baseline Metrics:
Using Chrome DevTools (Heap Snapshot):
Save the snapshot to: .agency/optimizations/baseline/heap-snapshot.heapsnapshot

Using Node.js --inspect:
# Start app with inspector
node --inspect server.js
# Or for Next.js
NODE_OPTIONS='--inspect' npm run dev
# Connect Chrome DevTools to node inspector
# Take heap snapshots
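Snapshots can also be written programmatically with Node's built-in v8 module; where to trigger the capture is up to the investigation:

```js
// heap-snapshot.js — write a .heapsnapshot file Chrome DevTools can open (sketch)
const v8 = require('v8');

// Call after exercising the suspected leaky code path
function captureHeapSnapshot() {
  const file = v8.writeHeapSnapshot();
  console.log(`Heap snapshot written to ${file}`);
  return file;
}

captureHeapSnapshot();
```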
Using clinic.js (Node.js profiling):
# Install clinic
npm install -g clinic
# Run clinic doctor (diagnoses performance issues)
clinic doctor -- node server.js
# Run clinic bubbleprof (async operations)
clinic bubbleprof -- node server.js
# Run clinic flame (flamegraph)
clinic flame -- node server.js
Capture Baseline Metrics:
Create comprehensive baseline report:
# Performance Baseline Report
**Date**: [current date]
**Target**: [optimization target]
**Framework**: [detected framework]
## Baseline Metrics
### [Target-Specific Metrics]
[Include specific metrics from profiling above]
## Bottlenecks Identified
1. **[Bottleneck 1]**: [Description]
- Impact: [High/Medium/Low]
- Metric: [specific measurement]
- Location: [file/query/component]
2. **[Bottleneck 2]**: [Description]
- Impact: [High/Medium/Low]
- Metric: [specific measurement]
- Location: [file/query/component]
## Optimization Opportunities
1. [Opportunity 1] - Expected improvement: [X%]
2. [Opportunity 2] - Expected improvement: [X%]
3. [Opportunity 3] - Expected improvement: [X%]
## Profiling Artifacts
- [List files in .agency/optimizations/baseline/]
---
**Next Step**: Create optimization plan prioritized by impact
Save to .agency/optimizations/baseline-report.md
Mark todo #2 as completed.
Mark todo #3 as in_progress.
Based on target, spawn the appropriate specialist to review baseline and create plan:
| Target | Specialist Agent |
|---|---|
| Bundle | Frontend Developer or Senior Developer |
| Database | Backend Architect |
| Rendering | Frontend Developer or ArchitectUX |
| API | Backend Architect or Performance Benchmarker |
| Memory | Senior Developer or Backend Architect |
| Auto/Multiple | Performance Benchmarker |
Use the Task tool to spawn the specialist:
Task: Create optimization plan for [TARGET]
Agent: [selected specialist]
Context:
- Optimization target: [TARGET]
- Framework: [detected framework]
- Baseline report: [read .agency/optimizations/baseline-report.md]
- Key bottlenecks: [summarize top 3]
Instructions:
1. Review the baseline profiling report
2. Prioritize optimizations by:
- Impact (performance improvement)
- Effort (implementation complexity)
- Risk (potential for breaking changes)
3. Create incremental optimization plan with:
- Each optimization as a separate step
- Expected improvement (quantitative)
- Implementation approach
- Rollback strategy if no improvement
4. Include specific techniques:
- For bundle: Code splitting, tree shaking, lazy loading, compression
- For database: Indexes, query optimization, connection pooling, caching
- For rendering: React.memo, useMemo, virtual scrolling, SSR optimization
- For API: Caching, batching, parallel requests, response compression
- For memory: Leak fixes, GC tuning, object pooling
Please create a detailed, prioritized optimization plan.
Wait for the specialist to complete the plan.
Review the specialist's plan and ensure it includes:
Use AskUserQuestion or present clearly:
## Optimization Plan for [TARGET]
### Summary
[Brief overview of optimization strategy]
### Optimizations (Prioritized by Impact)
#### High Impact
1. **[Optimization 1]** - Expected: [X% improvement]
- Approach: [brief description]
- Effort: [Low/Medium/High]
- Risk: [Low/Medium/High]
2. **[Optimization 2]** - Expected: [X% improvement]
- Approach: [brief description]
- Effort: [Low/Medium/High]
- Risk: [Low/Medium/High]
#### Medium Impact
3. **[Optimization 3]** - Expected: [X% improvement]
- Approach: [brief description]
### Execution Strategy
- Execute optimizations incrementally
- Benchmark after each optimization
- Rollback if no improvement or tests fail
- Estimated total time: [X hours]
**Ready to proceed with optimizations?**
If user asks for modifications, update the plan.
STOP HERE until user approves the plan.
Mark todo #3 as completed.
Mark todo #4 as in_progress.
# Create git checkpoint before optimizations
git add -A
git commit -m "chore: baseline before performance optimizations for $ARGUMENTS"
# Create optimization tracking file
cat > .agency/optimizations/execution-log.md <<EOF
# Optimization Execution Log
**Target**: $ARGUMENTS
**Start Time**: $(date)
## Optimizations
EOF
For each optimization in the plan (in priority order):
CURRENT_OPT="[optimization name]"
echo "## Optimization: $CURRENT_OPT" >> .agency/optimizations/execution-log.md
echo "**Started**: $(date)" >> .agency/optimizations/execution-log.md
# Git checkpoint
git add -A
git commit -m "chore: checkpoint before $CURRENT_OPT"
Option A: Direct Implementation (for simple optimizations):
# Use Edit tool to make changes
# Example: Add React.memo to component
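For instance, a memoization edit might look like this sketch (component and prop names are illustrative):

```jsx
import { memo, useMemo } from 'react';

// Re-renders only when `rows` changes, not on every parent render
const ResultsTable = memo(function ResultsTable({ rows }) {
  // Avoid re-sorting on unrelated state updates
  const sorted = useMemo(
    () => [...rows].sort((a, b) => a.name.localeCompare(b.name)),
    [rows]
  );
  return (
    <ul>
      {sorted.map((row) => (
        <li key={row.id}>{row.name}</li>
      ))}
    </ul>
  );
});

export default ResultsTable;
```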
Option B: Delegate to Specialist (for complex optimizations):
Task: Implement optimization - [optimization name]
Agent: [appropriate specialist]
Context:
- Optimization target: [TARGET]
- Specific optimization: [name and description]
- Expected improvement: [X%]
- Files to modify: [list]
Instructions:
1. Implement the optimization as described
2. Maintain existing functionality (all tests must pass)
3. Add comments explaining the optimization
4. DO NOT commit changes - I will handle benchmarking
Please implement this optimization now.
echo "### Benchmark" >> .agency/optimizations/execution-log.md
# Run appropriate benchmark based on target
# For bundle:
npm run build
du -sh .next/ > .agency/optimizations/benchmarks/$CURRENT_OPT-bundle.txt
# For database:
# Run same queries as baseline
psql -d yourdb -c "EXPLAIN ANALYZE [query]" > .agency/optimizations/benchmarks/$CURRENT_OPT-query.txt
# For rendering:
lighthouse http://localhost:3000 --output json --output-path .agency/optimizations/benchmarks/$CURRENT_OPT-lighthouse.json
# For API:
autocannon -c 10 -d 30 http://localhost:3000/api/endpoint > .agency/optimizations/benchmarks/$CURRENT_OPT-api.txt
# For memory:
# Take heap snapshot after same operations
Compare benchmark results to baseline:
# Example for bundle size
BASELINE_SIZE=$(cat .agency/optimizations/baseline/bundle-size.txt | awk '{print $1}')
CURRENT_SIZE=$(cat .agency/optimizations/benchmarks/$CURRENT_OPT-bundle.txt | awk '{print $1}')
# Calculate improvement percentage
# (For demonstration - actual calculation would be more robust)
echo "Baseline: $BASELINE_SIZE" >> .agency/optimizations/execution-log.md
echo "Current: $CURRENT_SIZE" >> .agency/optimizations/execution-log.md
Determine if improvement meets expectations:
# Run test suite to ensure no functionality broken
npm test
if [ $? -ne 0 ]; then
echo "❌ Tests failed after $CURRENT_OPT" >> .agency/optimizations/execution-log.md
# Rollback
git reset --hard HEAD~1
echo "Rolled back $CURRENT_OPT due to test failures" >> .agency/optimizations/execution-log.md
# Ask user what to do
Use AskUserQuestion:
Question: "Optimization '$CURRENT_OPT' caused test failures. How to proceed?"
Options:
- "Skip this optimization and continue"
- "Attempt to fix the issues"
- "Abort optimization workflow"
fi
<!-- Component: prompts/git/commit-formatting.md -->
If improvement is acceptable and tests pass:
Use conventional commit format with the perf type for performance improvements:
git add -A
git commit -m "$(cat <<'EOF'
perf($ARGUMENTS): $CURRENT_OPT
Improved [metric] by [X%]
Baseline: [baseline value]
Current: [current value]
Performance optimization as part of incremental improvement workflow.
EOF
)"
echo "✅ Committed: $CURRENT_OPT" >> .agency/optimizations/execution-log.md
echo "Improvement: [X%]" >> .agency/optimizations/execution-log.md
Refer to commit formatting component for detailed commit message standards and best practices.
If no improvement or regression:
git reset --hard HEAD~1
echo "⏪ Rolled back: $CURRENT_OPT (no improvement)" >> .agency/optimizations/execution-log.md
Continue executing optimizations incrementally, benchmarking after each, until all high and medium priority optimizations are attempted.
As optimizations complete, update TodoWrite with progress:
Currently implementing optimization 3 of 7: [optimization name]
Mark todo #4 as completed when all optimizations attempted.
Mark todo #5 as in_progress.
<!-- Component: prompts/quality-gates/quality-gate-sequence.md -->
Execute quality gates in order to ensure optimizations haven't broken functionality:
Refer to quality gate sequence component for detailed execution instructions and failure handling.
Run complete benchmark suite to capture final state:
# Bundle
npm run build
du -sh .next/ > .agency/optimizations/final-bundle.txt
# Lighthouse (if rendering optimization)
lighthouse http://localhost:3000 --output json --output-path .agency/optimizations/final-lighthouse.json
# API (if API optimization)
autocannon -c 10 -d 30 http://localhost:3000/api/endpoint > .agency/optimizations/final-api.txt
# Database (if database optimization)
# Re-run slow queries and compare
<!-- Component: prompts/reporting/metrics-comparison.md -->
Create metrics comparison showing improvements using the standard metrics comparison template.
Generate a comparison table showing:
Key Metrics by Target:
Refer to metrics comparison component for detailed table formats and calculation formulas.
Check for unintended negative impacts:
# Build time (should not significantly increase)
# Bundle size for other chunks (shouldn't grow)
# Test execution time (shouldn't slow down)
# Development server startup (shouldn't slow down)
If regressions found, decide if trade-off is acceptable or if rollback needed.
Run benchmarks multiple times (3 runs) to ensure consistency:
# Run benchmark 3 times
for i in 1 2 3; do
echo "Run $i:"
[benchmark command] > .agency/optimizations/consistency-run-$i.txt
done
# Verify results are within 5% variance
Mark todo #5 as completed.
Mark todo #6 as in_progress.
<!-- Component: prompts/reporting/summary-template.md -->
<!-- Component: prompts/reporting/metrics-comparison.md -->
Create a detailed performance optimization report using an adapted version of the implementation summary template.
Report Structure:
# Performance Optimization Report: [Target]
**Date**: [YYYY-MM-DD HH:MM:SS]
**Target**: [optimization target]
**Framework**: [detected framework]
**Duration**: [total time spent]
**Status**: ✅ SUCCESS / ⚠️ PARTIAL / ❌ FAILED
Brief overview (2-3 sentences) of what was optimized and overall results.
Key Results (top 3 metrics improved)
Optimizations: [N successful] / [M attempted]
For each optimization:
Successfully Implemented ✅
Attempted but Rolled Back ⏪
Use metrics comparison component to generate comparison tables showing:
Immediate Opportunities (not implemented):
Long-Term Recommendations:
Monitoring Recommendations:
List all files saved to .agency/optimizations/:
Summary of success, key learnings, and next steps.
Save report to: .agency/optimizations/optimization-report-[YYYYMMDD-HHMMSS].md
Refer to summary template component for detailed section formats and best practices.
Mark todo #6 as completed.
Profiling Tool Not Available:
- Install the missing tool (npm install -g lighthouse, etc.)

Application Won't Start:
Baseline Capture Fails:
Implementation Error:
Benchmark Failure:
No Improvement:
Tests Fail:
Regression Detected:
Change Target Mid-Optimization:
Abort Optimization:
After optimization, consider setting performance budgets:
Bundle Size:
Web Vitals:
API Performance:
Database:
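For bundle size and Web Vitals, one option is Lighthouse's budget file, passed with --budget-path; the thresholds below are placeholders to tune per project:

```json
[
  {
    "path": "/*",
    "resourceSizes": [
      { "resourceType": "script", "budget": 300 },
      { "resourceType": "total", "budget": 1000 }
    ],
    "timings": [
      { "metric": "largest-contentful-paint", "budget": 2500 },
      { "metric": "total-blocking-time", "budget": 300 }
    ]
  }
]
```

Run, for example, `lighthouse http://localhost:3000 --budget-path=budget.json` to flag pages that exceed the budget.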
Bundle Optimization:
Database Optimization:
Rendering Optimization:
API Optimization:
Memory Optimization:
Activate and reference these skills as needed:
Required:
- agency-workflow-patterns - Orchestration patterns (ACTIVATE IMMEDIATELY)

Technology-Specific (activate based on project):
- nextjs-16-expert - Next.js optimization techniques
- react-performance - React rendering optimization
- database-optimization - SQL query and index optimization
- webpack-optimization - Bundle optimization strategies

Specialist Skills:
- web-performance - Web vitals and Lighthouse optimization
- node-performance - Node.js profiling and optimization
- api-performance - REST/GraphQL API optimization

Related commands:
- /agency:work [issue] - Implement issue (may include performance improvements)
- /agency:review [pr] - Review PR (can check for performance issues)
- /agency:test [component] - Generate tests (including performance tests)
- /agency:refactor [scope] - Refactor code (often improves performance as side effect)

Remember:
Performance optimization is a scientific process: hypothesize, experiment, measure, conclude.
End of /agency:optimize command