# Optimize subagent definitions for reliable automatic invocation and peak performance
Audit and optimize subagent definitions for reliable automatic invocation and peak performance. Use when a subagent isn't triggering automatically or needs better tool permissions, model selection, or description clarity.
Installation:
- /plugin marketplace add lpasqualis/lpclaude
- /plugin install lpclaude-config@lpclaude-marketplace

Usage: <agent-name> [--aggressive]
Namespace: subagents/

Audit and optimize the subagent [agent-name] to maximize its effectiveness for automatic invocation. Operate idempotently: if the agent already adheres to best practices, report that it's fully optimized and take no action.
$ARGUMENTS
Before making ANY changes, identify what problem this agent solves.
Search for the agent file using Glob in this order:
- .claude/agents/[name].md (project-local)
- ~/.claude/agents/[name].md (global)

If found:
- Create a working copy at [original-path].optimizing
- Record its size with wc -l and wc -c
- Apply all edits to the .optimizing copy only

Optimize the description to follow this pattern:
"[Expert/Specialist] [domain] [purpose]. Invoke this agent to [capabilities]. Use when [trigger conditions] or when [problem indicators]. [MUST BE USED PROACTIVELY if applicable]."
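A minimal sketch of the search and working-copy steps above, expressed in Python for concreteness (in practice the Glob and Bash tools perform these steps; the helper name is an assumption):

```python
from pathlib import Path
import shutil

def prepare_working_copy(name: str) -> tuple[Path, Path] | None:
    """Locate the agent file (project-local first, then global) and copy it for editing."""
    candidates = [
        Path(".claude/agents") / f"{name}.md",               # project-local
        Path.home() / ".claude" / "agents" / f"{name}.md",   # global
    ]
    for original in candidates:
        if original.exists():
            working = original.parent / (original.name + ".optimizing")
            shutil.copy2(original, working)                  # all edits go to this copy only
            text = original.read_text()
            print(f"{len(text.splitlines())} lines, {len(text.encode('utf-8'))} bytes")
            return original, working
    return None  # agent not found in either location
```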
Apply permissive tool selection based on agent type:
- Read-only analysis: Read, LS, Glob, Grep
- File editing: Read, Edit, Write, MultiEdit, LS, Glob
- Command execution: Read, LS, Glob, Grep, Bash
- Content creation: Read, Write, LS, Glob, Grep
- Research: WebFetch, WebSearch (plus reading tools)
- General purpose: Read, Write, Edit, MultiEdit, LS, Glob, Grep

CRITICAL: Subagents CANNOT have the Task tool (no recursive delegation).
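As a sketch, the selection and the no-Task guard could look like this (the type labels mirror the list above and are illustrative; only the Task ban is a hard rule from this command):

```python
# Permissive tool sets keyed by primary agent function (labels are illustrative)
TOOL_SETS = {
    "analysis":  ["Read", "LS", "Glob", "Grep"],
    "editing":   ["Read", "Edit", "Write", "MultiEdit", "LS", "Glob"],
    "execution": ["Read", "LS", "Glob", "Grep", "Bash"],
    "writing":   ["Read", "Write", "LS", "Glob", "Grep"],
    "research":  ["WebFetch", "WebSearch", "Read", "LS", "Glob", "Grep"],
    "general":   ["Read", "Write", "Edit", "MultiEdit", "LS", "Glob", "Grep"],
}

def select_tools(agent_type: str) -> list[str]:
    tools = TOOL_SETS[agent_type]
    # Hard rule: a subagent must never be granted Task (no recursive delegation)
    if "Task" in tools:
        raise ValueError("Subagents cannot have the Task tool")
    return tools
```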
Use simple model names only: haiku, sonnet, or opus.
A missing model field inherits the session's model (acceptable).
Color assignment:
- If Bash is in tools → must be red (overrides all else)
- Otherwise assign the color semantically based on the agent's primary function
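A compact sketch of the model and color rules (the allowed model names come from the verification checklist below; the semantic fallback color is whatever fits the agent's function):

```python
ALLOWED_MODELS = {"haiku", "sonnet", "opus"}  # simple names only

def model_is_valid(model: str | None) -> bool:
    # A missing model field is acceptable: the agent inherits the session model
    return model is None or model in ALLOWED_MODELS

def assign_color(tools: list[str], semantic_choice: str) -> str:
    # Bash access overrides everything else and forces red
    return "red" if "Bash" in tools else semantic_choice
```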
Subagent prompts are SYSTEM IDENTITY DEFINITIONS:
Thresholds:
Simplification Principles:
Subagent Limitations (fix if found):
Invalid Patterns to Fix:
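The verification checklist below names two invalid patterns explicitly (command execution and agent invocation) in addition to the Task-tool ban. A minimal scan of the working copy for such patterns might look like this (the regexes are illustrative assumptions, not an exhaustive list):

```python
import re

# Illustrative patterns only: Task-tool references, slash-command execution,
# and attempts to invoke other agents from inside the system prompt.
INVALID_PATTERNS = [
    r"\bTask\s*\(",                  # recursive delegation via the Task tool
    r"^\s*/[a-z][\w:-]*\b",          # executing a slash command
    r"\binvoke (?:the )?\S+ agent",  # delegating to another agent
]

def find_invalid_patterns(prompt_text: str) -> list[str]:
    flags = re.IGNORECASE | re.MULTILINE
    return [p for p in INVALID_PATTERNS if re.search(p, prompt_text, flags)]
```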
After optimizing the working copy, run THREE identical parallel verification tasks to ensure no functionality was lost. The verifiers will compare the original with the optimized copy.
# Set the file paths for all verifiers
ORIGINAL_PATH="[path]/[name].md"
OPTIMIZED_PATH="[path]/[name].md.optimizing"
# Run THREE identical verifiers in parallel using Task tool
# Each verifier gets the EXACT SAME prompt:
prompt = """
You are an independent verification specialist. Compare the optimized copy with the original to ensure the optimization is safe and complete.
Files to compare:
- Original: $ORIGINAL_PATH
- Optimized: $OPTIMIZED_PATH
COMPREHENSIVE ANALYSIS CHECKLIST:
1. Read both versions:
- Original: Read($ORIGINAL_PATH)
- Optimized: Read($OPTIMIZED_PATH)
2. Functional Completeness:
- Core agent purpose preserved
- Tool capabilities complete (no missing tools)
- WebFetch/WebSearch operations preserved
- Auto-invocation triggers intact
- Model selection rationale preserved
- Behavioral patterns maintained
- Scope definitions preserved
- All "MUST BE USED PROACTIVELY" flags preserved
3. Semantic Integrity:
- Behavioral constraints intact (must/should/never/always)
- Identity definition preserved ("You are..." statements)
- Operational guidelines maintained
- Trigger conditions for auto-invocation unchanged
- Model requirements preserved
- Tool usage patterns intact
- Scope boundaries unchanged
4. Structural Validity:
- YAML syntax valid
- Required fields present (name, description)
- Name in lowercase-hyphenated format
- Description has trigger keywords for auto-invocation
- Tools properly formatted (NO Task tool allowed)
- Model using simple names only (opus/sonnet/haiku)
- Color semantically appropriate
- No invalid patterns (command execution, agent invocation)
- Proper system prompt structure
REPORT FORMAT:
## Verification Report
**Overall Status**: [PASS | FAIL | UNCERTAIN]
**Critical Issues Found** (if any):
- [List each issue that would break functionality]
**Minor Issues Found** (if any):
- [List formatting or style issues that don't affect function]
**Risk Level**: [NONE | LOW | MEDIUM | HIGH]
**Recommendation**:
[APPROVE - All critical functionality preserved]
[REJECT - Specific functionality lost: ...]
[UNCERTAIN - Need clarification on: ...]
"""
# Execute all three verifiers with this identical prompt in parallel
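Conceptually the dispatch looks like the sketch below; run_verifier_task is a hypothetical stand-in for a single Task tool call returning that verifier's overall status (in practice, issue the three Task calls in the same turn so they execute in parallel):

```python
from concurrent.futures import ThreadPoolExecutor
from typing import Callable

def run_three_verifiers(run_verifier_task: Callable[[str], str], prompt: str) -> list[str]:
    """Launch three identical verifications and collect their PASS/FAIL/UNCERTAIN statuses."""
    with ThreadPoolExecutor(max_workers=3) as pool:
        futures = [pool.submit(run_verifier_task, prompt) for _ in range(3)]
        return [future.result() for future in futures]
```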
After receiving three reports, analyze for consensus:
iteration = 1
consensus = False

while iteration <= 5 and not consensus:
    # Run 3 parallel verifications with the identical prompt
    results = [verifier1, verifier2, verifier3]  # the three overall statuses (PASS/FAIL/UNCERTAIN)

    # Check consensus: all three verifiers must report PASS
    if all(status == "PASS" for status in results):
        consensus = True
    else:
        # Analyze disagreements across the three reports
        issues = extract_common_issues(results)

        # Adjust the optimization on the working copy
        critical_issues = [issue for issue in issues if issue.is_critical]
        if critical_issues:
            rollback_specific_changes(critical_issues)
        else:
            refine_optimization(issues)
        iteration += 1

if not consensus:
    delete_optimized_copy()
    report_optimization_failed()
else:
    replace_original_with_optimized()
    report_success()
On success (consensus achieved), promote the working copy:
mv [path]/[name].md.optimizing [path]/[name].md

On failure, delete it:
rm [path]/[name].md.optimizing

Verify that no .optimizing files remain after completion.

## Agent [Optimization/Review] Complete ✅
**Agent**: [name]
**Status**: [Changes applied | Already compliant]
**Timestamp**: [date/time]
### Verification Results:
- **Verifier 1**: [PASS/FAIL/UNCERTAIN]
- **Verifier 2**: [PASS/FAIL/UNCERTAIN]
- **Verifier 3**: [PASS/FAIL/UNCERTAIN]
- **Consensus**: [ACHIEVED/FAILED]
- **Iterations used**: [X of 5 maximum]
### Final Disposition:
- **Original file**: [Replaced with optimized version | Preserved unchanged]
- **Working copy**: [Promoted to original | Deleted]
### Size Analysis:
- Line count: [X] lines
- Byte size: [Y] bytes
- Assessment: [Status based on thresholds]
### Changes Applied (if any):
[List specific changes made]
### Compliance Status:
- ✅ Description optimized for auto-invocation
- ✅ Tools verified (no Task tool)
- ✅ Model appropriate
- ✅ Color semantically assigned
- ✅ Size within limits
- ✅ Timestamp updated
- ✅ All verifiers passed