Dispatches parallel subagents for 3+ independent failures with wave coordination and success scoring (0.00-1.00) per domain. Use when multiple subsystems have unrelated failures that can be fixed concurrently without shared context.
/plugin marketplace add krzemienski/shannon-framework
/plugin install shannon@shannon-framework

This skill inherits all available tools. When active, it can use any tool Claude has access to.
Parallel investigation of independent failures with quantitative success tracking.
Dispatch one agent per independent problem domain. Shannon enhancement adds wave-based coordination, numerical success scoring, and MCP result aggregation.
Success is scored 0.00-1.00 per domain (formula below). Dispatch decision tree:
3+ independent failures?
├─ Yes, independent domains?
│ ├─ Yes → Parallel dispatch (optimal)
│ └─ No → Sequential investigation
└─ No → Single agent focus
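The decision tree above can be expressed as a small helper. This is an illustrative sketch; `dispatch_strategy` and its inputs are hypothetical names, not part of the framework:

```python
def dispatch_strategy(num_failures: int, domains_independent: bool) -> str:
    """Mirror the decision tree: parallel dispatch only pays off for
    3+ failures across independent domains."""
    if num_failures < 3:
        return "single-agent"
    return "parallel" if domains_independent else "sequential"
```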
Dispatch when:
- 3+ failures span unrelated subsystems
- Fixing one domain does not touch another domain's code paths
- Agents need no shared context to work concurrently

Don't dispatch:
- Failures share a root cause or overlapping code paths (investigate sequentially instead)
- Fewer than 3 failures (a single focused agent is cheaper)
Group failures by subsystem:
Domain A: Tool approval flow (file_a_test)
Domain B: Batch completion (file_b_test)
Domain C: Abort functionality (file_c_test)
Independence check: Fixing Domain A doesn't touch Domain B/C code paths.
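One way to sketch the grouping and independence check in Python. The `test_file` field and the top-level-directory heuristic are assumptions for illustration; real grouping may key off any subsystem signal:

```python
from collections import defaultdict

def group_by_subsystem(failures):
    """Group failing tests by the top-level directory of their test file
    (assumed heuristic: one directory per subsystem)."""
    domains = defaultdict(list)
    for failure in failures:
        subsystem = failure["test_file"].split("/")[0]
        domains[subsystem].append(failure)
    return dict(domains)

def are_independent(domain_files):
    """Domains are independent when no source file appears in more than
    one domain's code paths."""
    seen = set()
    for files in domain_files.values():
        if seen & set(files):
            return False
        seen |= set(files)
    return True
```

If `are_independent` returns `False`, fall back to sequential investigation per the decision tree.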
# Per-domain scoring (Serena)
per_domain_metrics = {
    "domain": "Tool approval",
    "problems_identified": 3,
    "initial_root_cause_clarity": 0.6,  # 0.0-1.0
    "estimated_complexity": 0.7,        # 0.0-1.0
}
# Launch agents concurrently with wave tracking
Task("Fix Domain A", wave_id="w1", timeout_minutes=30)
Task("Fix Domain B", wave_id="w1", timeout_minutes=30)
Task("Fix Domain C", wave_id="w1", timeout_minutes=30)
# Shannon wave monitors parallel execution
# MCP tracks: start_time, end_time, status per agent
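A minimal sketch of what the wave dispatcher does under the hood, using `concurrent.futures`. The `worker` callable and the `dispatch_wave` helper are hypothetical stand-ins for illustration; real dispatch goes through the Task tool:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def dispatch_wave(wave_id, domains, worker, timeout_s=1800):
    """Launch one worker per domain concurrently; record status per agent
    and the elapsed wall-clock time for the whole wave."""
    records = {}
    start = time.time()
    with ThreadPoolExecutor(max_workers=len(domains)) as pool:
        futures = {domain: pool.submit(worker, domain) for domain in domains}
        for domain, future in futures.items():
            try:
                result = future.result(timeout=timeout_s)
                records[domain] = {"wave_id": wave_id, "status": "completed", "result": result}
            except Exception:
                # Worker raised or timed out; mark the domain failed.
                records[domain] = {"wave_id": wave_id, "status": "failed", "result": None}
    elapsed = time.time() - start
    return records, elapsed
```

Because the workers run concurrently, `elapsed` approximates the slowest domain rather than the sum of all domains.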
Result structure (Serena):
parallel_dispatch:
  wave_id: "w1"
  domains_completed: 3
  time_sequential_equivalent: "90min"
  time_actual_parallel: "35min"
  efficiency_score: 2.57  # 90 / 35 = 2.57x faster
  per_domain:
    - domain: "Tool approval"
      agent_success_score: 0.95  # 2/3 fixed, 1 minor issue
      root_causes_found: 2
      files_modified: 3
    - domain: "Batch completion"
      agent_success_score: 1.00  # 2/2 fixed perfectly
      root_causes_found: 1
      files_modified: 2
    - domain: "Abort functionality"
      agent_success_score: 0.85  # 1/3 fixed, 1 partial
      root_causes_found: 1
      files_modified: 4
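The aggregate fields above can be derived from the per-domain records. A sketch, where `aggregate_wave` is an illustrative helper rather than a framework API:

```python
def aggregate_wave(per_domain, sequential_minutes, parallel_minutes):
    """Summarize a wave: domain count, mean success score, and the
    parallel-efficiency multiple (sequential time / parallel time)."""
    scores = [d["agent_success_score"] for d in per_domain]
    return {
        "domains_completed": len(per_domain),
        "mean_success_score": round(sum(scores) / len(scores), 2),
        "efficiency_score": round(sequential_minutes / parallel_minutes, 2),
    }
```

With the example numbers above (scores 0.95 / 1.00 / 0.85, 90 min sequential vs. 35 min parallel), this yields an efficiency score of 2.57.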
After agents return: aggregate per-domain scores, check for file-modification conflicts, and compute overall efficiency.

Each dispatched agent receives a tightly scoped prompt, for example:
**Domain:** Fix agent-tool-abort.test.ts failures
**Scope:** Only this file and its immediate dependencies
**Success Metric:** Fix all 3 failing tests
Failing tests:
1. "should abort tool with partial output" → expects 'interrupted'
2. "should handle mixed completed/aborted" → timing issue
3. "should properly track pendingToolCount" → gets 0, expects 3
Your task:
1. Identify root causes
2. Fix with minimal code changes
3. Verify all 3 tests pass
Return: Summary of root causes, changes made, final test results
Per-domain score:
agent_success_score = (
    (problems_fixed / problems_identified) * 0.6
    + test_pass_rate * 0.3
    + (1.0 if no_conflicts else 0.0) * 0.1
)
Range: 0.00-1.00
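As a runnable sketch of the formula (names mirror the pseudocode above):

```python
def agent_success_score(problems_fixed, problems_identified, test_pass_rate, no_conflicts):
    """Weighted score: 60% fix rate, 30% test pass rate, 10% conflict-free bonus."""
    fix_rate = problems_fixed / problems_identified
    conflict_bonus = 1.0 if no_conflicts else 0.0
    return round(fix_rate * 0.6 + test_pass_rate * 0.3 + conflict_bonus * 0.1, 2)
```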
Overall parallel efficiency:
efficiency_score = sequential_time_cost / actual_parallel_time
Range: 1.0x (no speedup) up to Nx (N-fold speedup for N independent domains)
| ❌ Anti-pattern | ✅ Better |
|---|---|
| Too broad scope (fix everything) | Specific scope (one test file) |
| No domain metrics | Track agent_success_score per domain |
| Skip conflict detection | MCP checks file modifications across agents |
| Sequential dispatch (defeats purpose) | Launch all agents concurrently with wave_id |
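The conflict check can be sketched as follows; `detect_conflicts` is a hypothetical helper (the real check is performed by MCP over recorded file modifications):

```python
def detect_conflicts(files_modified_by_agent):
    """Flag any file modified by more than one agent in the same wave."""
    owners = {}     # file path -> first agent that touched it
    conflicts = {}  # file path -> set of agents that touched it
    for agent, files in files_modified_by_agent.items():
        for path in files:
            if path in owners:
                conflicts.setdefault(path, {owners[path]}).add(agent)
            else:
                owners[path] = agent
    return conflicts
```

An empty result means the domains were truly independent; any entry is a signal to review the overlapping fixes before merging.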
Track metrics across sessions and use the historical data to calibrate future dispatches (e.g., complexity estimates and timeout budgets).

Works with: testing-skills-with-subagents (each agent tests independently)
MCP: tracks file modifications, test results, and timing
Serena: stores metrics for pattern learning