Provides guidance on optimizing CCPM hooks for performance and token efficiency. Auto-activates when developing, debugging, or benchmarking hooks. Includes caching strategies, token budgets, performance benchmarking, and best practices for maintaining sub-5-second hook execution times.
/plugin marketplace add duongdev/ccpm
/plugin install ccpm@duongdev-ccpm-marketplace
This skill provides comprehensive guidance for optimizing Claude Code hooks used in CCPM (Claude Code Project Management) to ensure high performance, minimal token usage, and reliable execution.
Claude Code hooks are event-based automation points that trigger Claude to perform intelligent actions at specific moments in the development workflow:
| Hook | Trigger | Purpose | Target Time |
|---|---|---|---|
| smart-agent-selector-optimized.prompt | UserPromptSubmit | Intelligent agent selection & invocation | <5s |
| tdd-enforcer-optimized.prompt | PreToolUse | Ensure tests exist before code | <1s |
| quality-gate-optimized.prompt | Stop | Automatic code review & security audit | <5s |
User Message
↓
[UserPromptSubmit Hook]
↓ smart-agent-selector analyzes request
↓ Selects best agents (with caching)
↓ Injects agent invocation instructions
↓
Claude Executes (Agents run in parallel/sequence)
↓
File Write/Edit Request
↓
[PreToolUse Hook]
↓ tdd-enforcer checks for tests
↓ Blocks if missing (invokes tdd-orchestrator)
↓
File Created/Modified
↓
Response Complete
↓
[Stop Hook]
↓ quality-gate analyzes changes
↓ Invokes code-reviewer, security-auditor
↓
Complete
Execution Time:
Token Budget:
Cache Performance:
User Experience Impact:
- <1s → Feels instant, no latency
- 1-5s → Acceptable delay
- >5s → Noticeable lag, frustrating
Token Budget Impact:
- <5,000 tokens per hook → Minimal overhead
- <10,000 tokens total → <5% of typical context window
- Well-optimized hooks → Enable more complex agent selection
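The byte-to-token figures quoted throughout this document (e.g. 19,307 bytes → ~4,826 tokens) correspond to a rough 4-bytes-per-token heuristic. A minimal sketch of that heuristic, not a real tokenizer:

```shell
#!/bin/sh
# ~4 bytes per token: the heuristic that matches this doc's byte/token figures
# (19,307 bytes -> ~4,826 tokens). A sketch, not a real tokenizer.
estimate_tokens() {
  bytes=$(wc -c < "$1")
  echo $(( bytes / 4 ))
}

printf 'You are an intelligent agent selector.\n' > /tmp/hook-sample.prompt
estimate_tokens /tmp/hook-sample.prompt   # -> 9 (39 bytes / 4)
```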
Purpose: Analyze user request and automatically invoke best agents
Original version: 19,307 bytes (118 lines), ~4,826 tokens
Optimized version: 3,538 bytes (79 lines), ~884 tokens
Improvement: 82% token reduction
Key Optimizations:
Execution Flow:
User: "Add authentication with JWT"
↓
[smart-agent-selector]
↓ Task: Implementation
↓ Keywords: auth, jwt, security
↓ Tech Stack: backend (detected)
↓ Score: tdd-orchestrator (85), backend-architect (95), security-auditor (90)
↓ Decision: Sequential execution
↓
Result: {
  "shouldInvokeAgents": true,
  "selectedAgents": [...],
  "execution": "sequential",
  "injectedInstructions": "..."
}
Purpose: Ensure test files exist before writing production code
Original version: 4,853 bytes, ~1,213 tokens
Optimized version: 2,477 bytes, ~619 tokens
Improvement: 49% token reduction
Key Optimizations:
Decision Matrix:
Is test file? → APPROVE (writing tests first)
Tests exist for module? → APPROVE (tests are ready)
Config/docs file? → APPROVE (no TDD needed)
Production code no tests? → BLOCK (invoke tdd-orchestrator)
User bypass? → APPROVE (with warning)
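The matrix above can be sketched as a small shell function. The file-classification patterns, the conventional test locations checked, and the `TDD_BYPASS` variable name are illustrative assumptions, not the actual hook's implementation:

```shell
#!/bin/sh
# Sketch of the TDD enforcer decision matrix. Patterns and the bypass
# variable are assumptions for illustration only.
tdd_decide() {
  file="$1"
  case "$file" in
    *.test.*|*.spec.*|*__tests__/*)   echo approve; return ;;  # writing tests first
    *.md|*.json|*.yml|*.yaml|*.toml)  echo approve; return ;;  # config/docs: no TDD needed
  esac
  module=$(basename "$file" | sed 's/\.[^.]*$//')
  # Do tests already exist for this module? (cheap check of conventional names)
  if ls "${module}".test.* "${module}".spec.* 2>/dev/null | grep -q .; then
    echo approve                       # tests are ready
  elif [ "${TDD_BYPASS:-0}" = "1" ]; then
    echo approve                       # user bypass, with warning
  else
    echo block                         # production code without tests: invoke tdd-orchestrator
  fi
}

tdd_decide "src/auth.test.ts"   # approve: test file
tdd_decide "README.md"          # approve: docs
tdd_decide "src/payments.ts"    # block (when no payments tests exist in cwd)
```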
Purpose: Automatically invoke code review and security audit
Original version: 4,482 bytes, ~1,120 tokens
Optimized version: 2,747 bytes, ~687 tokens
Improvement: 39% token reduction
Key Optimizations:
Decision Rules:
Code files modified? → Invoke code-reviewer
API/auth code? → Invoke security-auditor (blocking)
3+ files changed? → Invoke code-reviewer
Only docs/tests? → SKIP (no review needed)
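These rules can be sketched as a filter over the changed-file list. The path patterns used to spot API/auth code are assumptions, and for brevity the 3+-files rule is folded into the any-code-file rule:

```shell
#!/bin/sh
# Sketch of the quality-gate decision rules: reads changed file paths on
# stdin, one per line. Path patterns are illustrative assumptions.
quality_gate() {
  code=0; auth=0
  while IFS= read -r f; do
    case "$f" in
      *.test.*|*.spec.*|*.md|docs/*) continue ;;           # docs/tests: no review
      *auth*|*api*)                  auth=1; code=$((code + 1)) ;;
      *.ts|*.js|*.py|*.go|*.sh)      code=$((code + 1)) ;;
    esac
  done
  [ "$auth" -eq 1 ] && echo "invoke security-auditor (blocking)"
  [ "$code" -ge 1 ] && echo "invoke code-reviewer"
  [ "$code" -eq 0 ] && echo "SKIP (no review needed)"
  return 0
}

# Prints both the security-auditor and code-reviewer invocations:
printf 'src/auth/login.ts\ndocs/README.md\n' | quality_gate
```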
Problem: Full agent discovery takes ~2,000ms
Solution: Cache agent list with 5-minute TTL
Implementation:
# Original: Slow discovery
agents=$(jq -r '.plugins | keys[]' ~/.claude/plugins/installed_plugins.json)
# Result: ~2,000ms per execution
# Optimized: Cached discovery
CACHE_FILE="${TMPDIR:-/tmp}/claude-agents-cache-$(id -u).json"
CACHE_MAX_AGE=300 # 5 minutes
if [ -f "$CACHE_FILE" ]; then
  if [ $(($(date +%s) - $(stat -f %m "$CACHE_FILE"))) -lt "$CACHE_MAX_AGE" ]; then  # BSD stat; use `stat -c %Y` on GNU/Linux
    cat "$CACHE_FILE" # <100ms hit
    exit 0
  fi
fi
# Result: <100ms for cache hits, 96% faster
Cache Performance:
First run: 2,000ms (cache miss)
Subsequent: 20ms (cache hit)
After 5 min: 2,000ms (cache expired)
Expected hit rate: 85-95% (5-minute window typical)
Expected savings: 1,900ms per cached call
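These figures compose into an expected average saving per call of roughly hit_rate × (miss_cost − hit_cost). A quick sanity check in shell, using the ~2,000ms miss and ~20ms hit costs above:

```shell
#!/bin/sh
# Expected per-call saving for a given cache hit rate (integer percent),
# using the measured costs above: ~2,000ms miss, ~20ms hit.
expected_saving_ms() {
  hit_rate_pct="$1"
  echo $(( hit_rate_pct * (2000 - 20) / 100 ))
}

expected_saving_ms 85   # lower bound of the 85-95% hit-rate range
expected_saving_ms 95   # upper bound
```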
Problem: Injecting entire codebase context bloats tokens
Solution: Inline only critical information
Before (Verbose):
Available agents include:
- tdd-orchestrator: This agent is responsible for writing tests following the Red-Green-Refactor workflow. It can handle unit tests, integration tests, and end-to-end tests...
- backend-architect: The backend architect provides guidance on API design, database schemas, microservices patterns...
[continues for 50 agents]
After (Concise):
{
  "availableAgents": [
    {"name": "tdd-orchestrator", "score": 85, "reason": "TDD workflow"},
    {"name": "backend-architect", "score": 95, "reason": "API design"}
  ]
}
Token Savings: 60-70% reduction
Concept: Only show information when needed
Example - Agent Selection:
Level 1 (Default): Show top 3 agents with scores
Level 2 (If needed): Show all 10 agents with reasoning
Level 3 (Debugging): Show full scoring breakdown
Implementation:
# Don't include full descriptions
"availableAgents": [
{"name": "agent-1", "score": 85}
# Skip: "description": "Long description..."
]
# Only explain top choice
"reasoning": "Selected top 3 agents by score"
# Skip detailed reasoning for each
Problem: Hooks run on every message, even simple ones
Solution: Fast-path for low-complexity requests
Smart Agent Selector Example:
// Fast path: Simple docs question
if (message.includes("how to") && !message.includes("code")) {
  return {
    "shouldInvokeAgents": false,
    "reasoning": "Documentation question, skip agents"
  };
}
// Normal path: Requires agent selection
// ... full scoring algorithm
TDD Enforcer Example:
# Fast path: Test file
if [[ "$file" == *.test.* ]] || [[ "$file" == *.spec.* ]]; then
  echo '{"decision": "approve", "reason": "Test file"}'
  exit 0
fi
# Normal path: Check for test existence
# ... expensive file system operations
Location: /scripts/discover-agents-cached.sh
Execution Flow:
1. Check if cache file exists
↓ YES: Check age
↓ NO: Run full discovery
2. Check cache age (<5 minutes?)
↓ FRESH: Return cached result immediately (~100ms)
↓ STALE: Continue to discovery
3. Full agent discovery (expensive)
a. Scan plugin directory
b. Extract agent names/descriptions
c. Scan global agents
d. Scan project agents
4. Cache result
Cache file: ${TMPDIR:-/tmp}/claude-agents-cache-{uid}.json
TTL: 300 seconds (5 minutes)
Cache File Location:
CACHE_FILE="${TMPDIR:-/tmp}/claude-agents-cache-$(id -u).json"
Cache Invalidation:
rm -f "${TMPDIR:-/tmp}/claude-agents-cache-$(id -u).json"
When Cache Becomes Invalid:
Scenario 1: First request after startup
discover-agents.sh: ~2,000ms (full scan)
discover-agents-cached.sh: ~2,000ms (cache miss, first run)
Scenario 2: Second request (within 5 minutes)
discover-agents.sh: ~2,000ms (full scan again)
discover-agents-cached.sh: ~20ms (cache hit - 100x faster!)
Typical usage (5 requests in 5 minutes):
Without cache: 5 × 2,000ms = 10,000ms total
With cache: 2,000ms + 20ms + 20ms + 20ms + 20ms = 2,080ms total
Speedup: 4.8x faster
In smart-agent-selector-optimized.prompt:
# Instead of discovering agents inline (expensive)
# Load pre-discovered agents from context
availableAgents={{availableAgents}}
# The Claude Code hook system pre-runs discovery
# and passes cached results automatically
Before (Wordy):
## Selection Strategy
This section describes the comprehensive strategy used to select the best agents
based on multiple factors including the user's request, the detected task type,
the technology stack in use, and various scoring algorithms...
### 1. Task Classification
The first step in the selection process is to classify the type of task the user
is requesting. This involves analyzing the user's message to determine whether
they are asking for help with...
After (Concise):
## Selection Strategy
### 1. Task Classification
- Planning/Design → architect agents
- Implementation → TDD first, then dev agents
- Bug Fix → debugger
Token Savings: 70% for explanatory text
Before (Expanded):
The user is asking about implementing a feature. This is an implementation task.
Based on their request mentioning "authentication" and "JWT", they're likely
working on backend authentication. The tech stack detected is Node.js/TypeScript.
With these factors, I recommend invoking tdd-orchestrator first, then
backend-architect, and finally security-auditor...
After (Templated):
{
  "taskType": "implementation",
  "keywords": ["auth", "jwt"],
  "techStack": "backend",
  "selectedAgents": [
    {"name": "tdd-orchestrator", "score": 85},
    {"name": "backend-architect", "score": 95},
    {"name": "security-auditor", "score": 90}
  ]
}
Token Savings: 60% with structured format
Before (Large examples):
### Example: Implementation Task
When the user says "Add user authentication with JWT tokens to our API",
the system should analyze this and determine that it's an implementation task
requiring TDD, architecture review, and security validation. The response would be:
{
  "shouldInvokeAgents": true,
  "selectedAgents": [
    {"name": "tdd-orchestrator", "type": "plugin", "reason": "Write tests first", "priority": "high", "score": 85},
    {"name": "backend-architect", "type": "project", "reason": "Design secure API", "priority": "high", "score": 95},
    {"name": "security-auditor", "type": "plugin", "reason": "Validate auth implementation", "priority": "high", "score": 90}
  ]
}
After (Reference examples):
Example: `src/hooks/examples/implementation-task.json`
Token Savings: 80% by referencing external files
Before (Repeated logic):
## Selection Rules
1. Use exact agent names from available agents
2. Check that agent names are valid
3. Only use agents that exist
4. Ensure agent names are correct
## Validation Rules
1. Agent must exist
2. Agent must be valid
3. Agent name must match
After (Single source):
## Selection Rules
1. Use exact agent names from available agents
2. Ensure all agents exist before selection
Token Savings: 50% by consolidating
Location: /scripts/benchmark-hooks.sh
Run Complete Benchmark:
./scripts/benchmark-hooks.sh
Example Output:
╔════════════════════════════════════════════════════════════════════════╗
║ CCPM Hook Performance Benchmark Report ║
╚════════════════════════════════════════════════════════════════════════╝
SECTION 1: Script Performance (Execution Time)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
📊 discover-agents.sh (ORIGINAL)
⏱️ Average Execution Time: 2045ms
📦 Output Size: 4821 bytes
🎯 Estimated Tokens: 1205 tokens
⚠️ Performance: ACCEPTABLE (<5s)
📊 discover-agents-cached.sh (OPTIMIZED - First Run)
⏱️ Average Execution Time: 2123ms
📦 Output Size: 4892 bytes
🎯 Estimated Tokens: 1223 tokens
⚠️ Performance: ACCEPTABLE (<5s)
📊 discover-agents-cached.sh (OPTIMIZED - Cached)
⏱️ Average Execution Time: 18ms
✅ Performance: EXCELLENT (<100ms) - 96% faster with cache!
SECTION 2: Hook Prompt Files (Token Usage)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
📄 smart-agent-selector.prompt (ORIGINAL)
📦 File Size: 19307 bytes
📏 Line Count: 118 lines
🎯 Estimated Tokens: 4826 tokens
⚠️ Token Usage: NEEDS OPTIMIZATION (>3000 tokens)
📄 smart-agent-selector-optimized.prompt (NEW)
📦 File Size: 3538 bytes
📏 Line Count: 79 lines
🎯 Estimated Tokens: 884 tokens
✅ Token Usage: GOOD (<5000 tokens)
SECTION 3: Summary & Recommendations
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
📊 Token Usage Comparison
Original Total: ~10071 tokens
Optimized Total: ~3436 tokens
Savings: ~6635 tokens (66% reduction)
🎯 Performance Targets Met:
✅ All hooks execute in <5 seconds
✅ Cached discovery runs in <100ms (96% faster)
✅ Token usage reduced by 60% in optimized hooks
✅ No functionality regression
Execution Time Metrics:
Token Usage Metrics:
Cache Hit Rate:
File Naming Convention:
Original: hooks/my-hook.prompt
Optimized: hooks/my-hook-optimized.prompt
Starting Template:
You are a [brief description].
## Context
[Relevant variables from hook]
## Analysis Rules
- Rule 1
- Rule 2
## Response Format (JSON ONLY)
```json
{
  "decision": "approve|block",
  "reasoning": "..."
}
```
Optimization Checklist:
### Step 2: Test with Benchmark Script
```bash
# Run the benchmark and save the report
./scripts/benchmark-hooks.sh > output.txt

# Focus on your hook
grep -A 20 "my-hook-optimized" output.txt

# Check metrics:
# - Execution time: <5s target
# - Token count: <5000 tokens target
# - Performance category: ✅ EXCELLENT or GOOD
```
Create Comparison Table:
Metric | Original | Optimized | Improvement
--------------------|----------|-----------|-------------
File Size (bytes) | 19,307 | 3,538 | -82%
Lines of Code | 118 | 79 | -33%
Estimated Tokens | 4,826 | 884 | -82%
Execution Time (ms) | 2,045 | 18* | -99%*
*With cache hit
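The improvement column can be reproduced from the raw before/after numbers. A small helper, rounding to the nearest percent as the table does:

```shell
#!/bin/sh
# Improvement percentage from before/after values, rounded like the table.
improvement_pct() {
  awk -v b="$1" -v a="$2" 'BEGIN { printf "%.0f%%\n", (a - b) * 100 / b }'
}

improvement_pct 19307 3538   # file size
improvement_pct 118 79       # lines of code
improvement_pct 4826 884     # estimated tokens
```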
If Still Not Meeting Targets:
Too many tokens?
Too slow?
Too much duplication?
Rule: Never overwrite the original hook
Good:
- hooks/smart-agent-selector.prompt (original, reference)
- hooks/smart-agent-selector-optimized.prompt (production)
Bad:
- hooks/smart-agent-selector.prompt (modified, no baseline)
Benefit: Easy to compare, can revert if needed
Before Deploying:
# Test with actual CCPM commands
/ccpm:plan "Add feature X" my-project
# Test with implementation task
/ccpm:work
# Test with multiple files
/ccpm:sync "Completed API design"
Measure:
In Hook File Comments:
You are an intelligent agent selector.
Token Budget:
- Context injection: ~500 tokens
- Available agents list: ~200 tokens
- Selection logic: ~100 tokens
- Response: ~100 tokens
Total target: <5000 tokens
Create Baseline:
# Week 1
./scripts/benchmark-hooks.sh > week1-results.txt
# Week 4
./scripts/benchmark-hooks.sh > week4-results.txt
# Compare
diff week1-results.txt week4-results.txt
Regression Detection:
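One way to detect regressions automatically is to compare timings against the stored baseline with a tolerance. A sketch with a hypothetical 20% threshold; in practice the two timing values would be extracted from the week1/week4 benchmark reports:

```shell
#!/bin/sh
# Flag a regression if the new timing exceeds the baseline by more than 20%.
# The 20% tolerance and the raw millisecond inputs are assumptions; a real
# gate would grep these numbers out of the benchmark reports.
check_regression() {
  baseline_ms="$1"; current_ms="$2"
  if [ "$current_ms" -gt $(( baseline_ms * 120 / 100 )) ]; then
    echo "REGRESSION: ${baseline_ms}ms -> ${current_ms}ms"
    return 1
  fi
  echo "OK: ${baseline_ms}ms -> ${current_ms}ms"
}

check_regression 2045 2123          # within tolerance
check_regression 2045 3000 || true  # flagged as a regression
```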
Template:
{
  "decision": "approve|block",
  "reasoning": "...",
  "fallback": {
    "decision": "approve",
    "reasoning": "Unable to determine, defaulting to approve"
  }
}
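On the consuming side, the same fail-open posture can be enforced by validating the hook's output before trusting it. In this sketch a crude `grep` stands in for real JSON parsing (e.g. `jq`) to keep it dependency-free; the function name is illustrative:

```shell
#!/bin/sh
# Fail open: if hook output lacks a valid "decision" field, substitute the
# safe default rather than blocking the user on a malformed response.
safe_decision() {
  raw="$1"
  if printf '%s' "$raw" | grep -Eq '"decision"[[:space:]]*:[[:space:]]*"(approve|block)"'; then
    printf '%s\n' "$raw"
  else
    echo '{"decision":"approve","reasoning":"Unable to determine, defaulting to approve"}'
  fi
}

safe_decision '{"decision":"block","reasoning":"No tests found"}'  # passed through
safe_decision 'not json at all'                                    # replaced by safe default
```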
Never:
Before (Original):
smart-agent-selector.prompt
- 19,307 bytes
- 118 lines
- ~4,826 tokens
- Detailed explanations for every concept
- Multiple example patterns shown in full
- Verbose scoring algorithm explanation
After (Optimized):
smart-agent-selector-optimized.prompt
- 3,538 bytes (-82%)
- 79 lines (-33%)
- ~884 tokens (-82%)
- Concise bullet points
- Reference examples instead
- Inline scoring formula
Key Changes:
Result: Maintains 100% functionality with 82% fewer tokens
Problem: Hook ran too many file system checks
Solution:
# Before: Check all possible test locations
for pattern in "*.test.*" "*.spec.*" "__tests__/*"; do
  find . -name "$pattern" -path "*$module*"
done
# Result: ~1,500ms for large codebases
# After: fast path with a single derived candidate path
test_path="${file%.*}.test.${file##*.}"  # illustrative derivation, e.g. src/auth.ts -> src/auth.test.ts
if [ -f "$test_path" ]; then
  exit 0  # found, approve immediately
fi
# Result: <50ms
Improvement: 96% faster with same accuracy
Problem: Cache hit rate only 40% (too low)
Analysis: TTL too short, agents change during session
Solution: Increase TTL from 60 to 300 seconds
# Before
CACHE_MAX_AGE=60 # 1 minute → 60% miss rate
# After
CACHE_MAX_AGE=300 # 5 minutes → 85% hit rate
Impact:
Hook optimization in CCPM focuses on three core principles:
By following the optimization strategies and best practices outlined in this skill, you can maintain or improve hook functionality while significantly reducing execution time and token usage, resulting in a better user experience and lower API costs.
Run ./scripts/benchmark-hooks.sh to establish a baseline.
Key files:
- /scripts/benchmark-hooks.sh
- /scripts/discover-agents-cached.sh
- /hooks/*-optimized.prompt
- /docs/