From claude-optimize
Use when running /optimize:discover to cross-reference session usage data against the current environment inventory. Detects gaps (repeated manual work that could be automated), bloat (unused skills, MCP servers, agents), and calculates tooling balance. Do NOT trigger on general coding tasks or other optimize modes.
```shell
npx claudepluginhub btcdlabs/btcd-cc-marketplace --plugin claude-optimize
```

This skill uses the workspace's default tool permissions.
Cross-references session usage data against the current environment inventory to find gaps (things to add) and bloat (things to remove). This is the analytical core of `/optimize:discover`.
You receive three data sets from the discover command's parallel agents:
Gaps are things the user does manually that could be automated. Cross-reference session data against the inventory:
For each bash command pattern appearing 5+ times across sessions:
Examples:
- `npm test` run 47 times → candidate for a test-runner hook or skill
- `prettier --write` run 23 times → candidate for a format-on-save hook
- `docker compose up` run 15 times → candidate for a dev-environment skill

Use the workflow bigrams/trigrams section from the session analyzer output to identify repeating sequences. Do NOT manually parse session JSONL files or use grep/awk/for loops.
- Read → Edit → Bash(npm test) appearing 8+ times → test-after-edit workflow skill
- Grep → Read → Read → Edit appearing 10+ times → search-and-fix workflow

If a multi-step sequence repeats and no skill covers it, it is a gap candidate for a new skill or agent.
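The sequence check above is an n-gram frequency count over the session's tool calls. A minimal sketch, assuming the tool calls are already extracted as a flat list of names in session order (the real session-analyzer output is richer than this):

```python
from collections import Counter

def repeated_ngrams(tool_calls, n=3, min_count=8):
    """Count length-n tool-call sequences and keep those seen min_count+ times.

    `tool_calls` is a hypothetical flat list of tool names in order,
    e.g. ["Read", "Edit", "Bash(npm test)", ...].
    """
    grams = Counter(
        tuple(tool_calls[i:i + n]) for i in range(len(tool_calls) - n + 1)
    )
    return {gram: count for gram, count in grams.items() if count >= min_count}

calls = ["Read", "Edit", "Bash(npm test)"] * 8 + ["Grep", "Read"]
print(repeated_ngrams(calls, n=3, min_count=8))
# → {('Read', 'Edit', 'Bash(npm test)'): 8}
```

Running the same counter with `n=2` and `n=3` over each session, then summing across sessions, reproduces the bigram/trigram view the session analyzer provides.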
Cross-reference the detected stack (from codebase-analyzer) against MCP servers in .mcp.json:
Reference: ${CLAUDE_PLUGIN_ROOT}/skills/mcp-advisor/references/mcp-catalog.md
Recurring errors in session data suggest missing prevention hooks.
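One way to surface recurring errors is to normalize away the volatile parts of each message (paths, line numbers) and count signatures. A sketch with illustrative normalization rules, not the plugin's actual logic:

```python
import re
from collections import Counter

def recurring_errors(error_lines, min_count=5):
    """Group error messages by a normalized signature and flag repeats."""
    def normalize(msg):
        msg = re.sub(r"/[\w./-]+", "<path>", msg)  # collapse file paths
        msg = re.sub(r"\b\d+\b", "<n>", msg)       # collapse line numbers, counts
        return msg.strip()

    signatures = Counter(normalize(m) for m in error_lines)
    return {sig: count for sig, count in signatures.items() if count >= min_count}
```

An error like `Build failed at /src/app.ts line 12` repeated eight times would surface as `Build failed at <path> line <n>` with a count of 8, a candidate for a prevention hook.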
Cross-reference file edit patterns from the session analyzer output:
Bloat is things that exist but aren't being used. Before running bloat detection, verify the minimum session threshold.
Read the heuristics: ${CLAUDE_PLUGIN_ROOT}/skills/discover-analyzer/references/bloat-heuristics.md
Use the session analyzer script output and the environment inventory to cross-reference usage. Do NOT manually search session JSONL files.
```shell
# Get environment inventory
python3 ${CLAUDE_PLUGIN_ROOT}/scripts/env_inventory.py --json

# Get session usage data (already run by session-analyzer skill)
python3 ${CLAUDE_PLUGIN_ROOT}/skills/session-analyzer/scripts/analyze_sessions.py --max-sessions 30
```
Cross-reference the two outputs:
- MCP server usage: `mcp__<server-name>__*` patterns in the MCP tool calls section. Zero calls across 15+ sessions → "potentially unused".

ALWAYS use the bundled script for skill redundancy detection. Do NOT manually tokenize descriptions or calculate similarity.
```shell
python3 ${CLAUDE_PLUGIN_ROOT}/scripts/skill_analyzer.py --auto-discover --threshold 0.7 --json
```
The script automatically tokenizes descriptions, removes stop words, calculates Jaccard similarity, and flags pairs with >70% overlap.
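For intuition only (the bundled script remains the source of truth), the similarity measure it describes can be sketched as a token-set Jaccard index. The stop-word list here is a small illustrative sample:

```python
# Illustrative stop words; the real script's list may differ.
STOP_WORDS = {"the", "a", "an", "and", "or", "to", "of", "for", "in", "on", "with"}

def jaccard(desc_a, desc_b):
    """Jaccard similarity of two skill descriptions after stop-word removal."""
    def tokens(text):
        return {w for w in text.lower().split() if w not in STOP_WORDS}

    a, b = tokens(desc_a), tokens(desc_b)
    if not (a | b):
        return 0.0
    return len(a & b) / len(a | b)
```

Two descriptions scoring above 0.7 on this measure would be flagged as a redundant pair.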
CRITICAL: Never flag security-related items as bloat. Read the full exemption list in ${CLAUDE_PLUGIN_ROOT}/skills/discover-analyzer/references/security-constraints.md under "Never Propose Removal Of."
Calculate tooling coverage to help the user understand how much of their tooling is earning its context cost:
```
tooling_coverage = unique_tools_actually_used_in_sessions / total_tools_available
```
Where:
- `unique_tools_actually_used_in_sessions` counts the unique skills, MCP tools (`mcp__*`), and hooks that matched at least once across all analyzed sessions
- `total_tools_available` is the total count from the environment inventory

This metric is stable regardless of how many sessions are analyzed (unlike a per-session ratio).
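As a minimal sketch of the calculation, assuming the used and available tool names have already been extracted as flat lists (hypothetical inputs; the real analyzer emits richer JSON):

```python
def tooling_coverage(used_tools, available_tools):
    """Unique tools used at least once, divided by the total inventory."""
    available = set(available_tools)
    if not available:
        return 0.0
    used = set(used_tools) & available  # ignore calls to tools not in the inventory
    return len(used) / len(available)
```

Because both sides are de-duplicated sets, analyzing more sessions can only move the numerator toward the denominator, which is why the ratio stays stable as session count grows.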
| Band | Range | Meaning |
|---|---|---|
| Low coverage | < 0.3 | Most tools unused — likely bloated or recently configured |
| Moderate coverage | 0.3 - 0.6 | Normal — some tools are situational |
| High coverage | 0.6 - 0.9 | Healthy — most tools earning their context cost |
| Full coverage | > 0.9 | Everything active — well-tuned or minimal setup |
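The band lookup from the table above can be sketched as a simple threshold chain. The table leaves exact boundary handling unspecified, so the cutoff choices here are one reasonable reading:

```python
def coverage_band(coverage):
    """Map a tooling-coverage score (0.0-1.0) to a band name from the table."""
    if coverage < 0.3:
        return "Low coverage"
    if coverage <= 0.6:
        return "Moderate coverage"
    if coverage <= 0.9:
        return "High coverage"
    return "Full coverage"
```

For example, a score of 0.75 falls in the High coverage band: most tools are earning their context cost.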
Report the band and score, with a directional recommendation.
Return a structured analysis report:
## Discovery Analysis
### Data Summary
- Sessions analyzed: N (date range)
- Session quality: N valid sessions after filtering
- Tooling coverage: X.X ([BAND])
### Gaps Found (N)
| # | Type | Evidence | Candidate |
|---|------|----------|-----------|
| 1 | Bash command | `npm test` ×47 | New hook or skill |
| 2 | Stack coverage | Postgres detected, no MCP | Install MCP |
| 3 | Error pattern | Build failure ×8 | Prevention hook |
### Bloat Found (N)
| # | Item | Type | Evidence | Confidence |
|---|------|------|----------|------------|
| 1 | unused-skill | Skill | 0 triggers in 25 sessions | Medium |
| 2 | old-mcp | MCP | 0 calls in 30 sessions | High |
### Redundancy Found (N)
| # | Item A | Item B | Overlap |
|---|--------|--------|---------|
| 1 | skill-a | skill-b | 78% description overlap |
### Balance Recommendation
[Direction based on the tooling coverage band]