# performance-audit (from the sdlc plugin)
Invoke for broad codebase performance investigations: slow apps or APIs, endpoint latency problems, N+1 queries, algorithmic bottlenecks, missing caching, memory inefficiencies, and framework antipatterns. Triggers on "audit the performance", "find bottlenecks", "why are our endpoints slow", "find N+1 queries", or "check for performance antipatterns". Scans hot paths across any language or framework, produces a prioritized findings report with impact estimates, and walks through fixes interactively. Do NOT use for single targeted fixes, test speed optimization, infrastructure scaling, or general code review.
```shell
npx claudepluginhub jerrod/agent-plugins --plugin sdlc
```
Log skill invocation, using `$PLUGIN_DIR` (detected in Step 1 via `find . -name "run-gates.sh"`):

```shell
bash "$PLUGIN_DIR/../scripts/audit-trail.sh" log review sdlc:performance-audit started --context "$ARGUMENTS"
bash "$PLUGIN_DIR/../scripts/audit-trail.sh" log review sdlc:performance-audit completed --context="<summary>"
```

Find real performance problems, not style preferences. Every finding must describe a concrete performance impact (CPU, memory, I/O, or latency) with a specific fix. "Could be slow" is not a finding; "O(n²) nested loop over the user list, will degrade at ~1000 users" is.
Detect project languages, frameworks, and structure:
```shell
echo "=== Project Detection ==="
[ -f "pyproject.toml" ] && echo "Python project"
[ -f "package.json" ] && echo "Node/JS/TS project"
[ -f "Gemfile" ] && echo "Ruby project"
[ -f "go.mod" ] && echo "Go project"
[ -f "Cargo.toml" ] && echo "Rust project"
{ [ -f "build.gradle" ] || [ -f "pom.xml" ]; } && echo "Java/Kotlin project"
```
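The same file-existence pattern extends to framework detection by grepping dependency manifests. A minimal sketch; the framework names and match strings here are illustrative, not an exhaustive list:

```shell
# Sketch: detect common web frameworks from dependency manifests.
# The framework/manifest pairs below are illustrative examples only.
detect_frameworks() {
  [ -f "package.json" ] && grep -q '"express"' package.json && echo "Express"
  [ -f "pyproject.toml" ] && grep -qi 'django' pyproject.toml && echo "Django"
  [ -f "Gemfile" ] && grep -qi 'rails' Gemfile && echo "Rails"
  [ -f "go.mod" ] && grep -q 'gin-gonic' go.mod && echo "Gin"
  return 0
}
detect_frameworks
```

Knowing the framework up front lets later steps apply framework-specific antipattern checks (e.g. ORM lazy-loading defaults) instead of generic ones.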
Use Glob to identify hot paths — files most likely to have performance impact:
```
Glob: **/routes/**
Glob: **/controllers/**
Glob: **/api/**
Glob: **/models/**
Glob: **/services/**
Glob: **/queries/**
Glob: **/middleware/**
```
Build a prioritized file list: data layer first, then services, then controllers, then utilities.
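One way to sketch that ordering in shell, with directory names taken from the globs above (adjust to the actual project layout):

```shell
# Emit hot-path files in audit priority order: data layer first,
# then services/middleware, then the API surface and controllers.
for dir in models queries services middleware api controllers routes; do
  find . -path "*/node_modules" -prune -o -type f -path "*/$dir/*" -print
done | awk '!seen[$0]++'   # deduplicate files matched by more than one pattern
```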
Run the performance gate script to catch Tier 1 issues:

```shell
PLUGIN_DIR=$(find . -name "run-gates.sh" -path "*/sdlc/*" -exec dirname {} \; 2>/dev/null | head -1)
if [ -z "$PLUGIN_DIR" ]; then
  PLUGIN_DIR=$(find "$HOME/.claude" -name "run-gates.sh" -path "*/sdlc/*" -exec dirname {} \; 2>/dev/null | sort -V | tail -1)
fi
bash "$PLUGIN_DIR/gate-performance.sh"
```

Read the proof file and use the gate findings as the initial findings list:

```shell
cat .quality/proof/performance.json
```
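If the proof file contains a findings array with per-finding severities (an assumption; check the actual schema emitted by `gate-performance.sh`), a quick summary by severity helps with triage:

```shell
# Summarize gate findings by severity. Assumes the proof file has a
# top-level "findings" array with "severity" fields -- adjust to the
# real schema produced by gate-performance.sh.
summarize_proof() {
  python3 - "$1" <<'EOF'
import collections, json, sys
data = json.load(open(sys.argv[1]))
counts = collections.Counter(f.get("severity", "unknown") for f in data.get("findings", []))
for sev in ("critical", "high", "medium", "advisory"):
    print(f"{sev}: {counts.get(sev, 0)}")
EOF
}
proof=".quality/proof/performance.json"
[ -f "$proof" ] && summarize_proof "$proof" || echo "no proof file yet; run the gate first"
```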
Read files in priority order from Step 1. For each file, check all applicable categories:
Write findings to `.quality/performance-audit-ledger.md` as you go, using this format:

```markdown
## Finding: [category] — [file:line]
**Severity:** critical|high|medium|advisory
**Impact:** [concrete description of performance impact]
**Current code:** [snippet]
**Suggested fix:** [snippet or description]
```
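A filled-in ledger entry might look like this; the file path, line number, and code snippets are hypothetical:

```markdown
## Finding: N+1 query — app/services/report_service.py:42
**Severity:** high
**Impact:** One SQL query per user inside the reporting loop; roughly 200 extra database round trips per request at current data volume
**Current code:** `for user in users: orders = db.query(Order).filter_by(user_id=user.id).all()`
**Suggested fix:** Load all orders in a single query filtered by the full user-id list, then group them in memory
```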
After scanning all priority files, write PERFORMANCE-AUDIT.md to the project root:
Walk through findings one at a time with the user:
After each fix is applied, commit it with a `perf: <description>` message, then re-run the gate and re-check the proof file:

```shell
bash "$PLUGIN_DIR/gate-performance.sh"
cat .quality/proof/performance.json
```