From shannon
Use for 3+ independent failures - dispatches parallel subagents with Shannon wave coordination, success scoring (0.00-1.00) per domain, and MCP result aggregation
`npx claudepluginhub krzemienski/shannon-framework --plugin shannon`

This skill uses the workspace's default tool permissions.
**Parallel investigation of independent failures with quantitative success tracking.**
Dispatch one agent per independent problem domain. Shannon enhancement adds wave-based coordination, numerical success scoring, and MCP result aggregation.
Success Scoring (0.00-1.00):
```
3+ independent failures?
├─ Yes, independent domains?
│  ├─ Yes → Parallel dispatch (optimal)
│  └─ No → Sequential investigation
└─ No → Single agent focus
```
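The decision tree above can be sketched as a small helper; the function name and return labels are illustrative, not part of Shannon's API:

```python
def choose_dispatch(failure_count: int, domains_independent: bool) -> str:
    """Mirror the decision tree: parallel dispatch only for 3+ independent failures."""
    if failure_count < 3:
        return "single-agent focus"
    if domains_independent:
        return "parallel dispatch"
    return "sequential investigation"
```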
Dispatch when:
Don't dispatch:
Group failures by subsystem:

- Domain A: Tool approval flow (file_a_test)
- Domain B: Batch completion (file_b_test)
- Domain C: Abort functionality (file_c_test)

Independence check: fixing Domain A doesn't touch Domain B/C code paths.
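One way to mechanize the independence check is to compare the file sets each domain's fix would touch; this is a hypothetical sketch, not Shannon's built-in check (file names are made up):

```python
def domains_are_independent(domain_files: dict[str, set[str]]) -> bool:
    """True when no file appears in more than one domain's code paths."""
    seen: set[str] = set()
    for files in domain_files.values():
        if seen & files:  # shared file => domains overlap, dispatch sequentially
            return False
        seen |= files
    return True

# Example: disjoint file sets, so these domains can run in parallel.
independent = domains_are_independent({
    "Tool approval": {"approval.ts", "approval.test.ts"},
    "Batch completion": {"batch.ts"},
})
```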
```python
# Per-domain scoring (Serena)
per_domain_metrics = {
    "domain": "Tool approval",
    "problems_identified": 3,
    "initial_root_cause_clarity": 0.6,  # 0.0-1.0
    "estimated_complexity": 0.7,        # 0.0-1.0
}
```
```python
# Launch agents concurrently with wave tracking
Task("Fix Domain A", wave_id="w1", timeout="30min")
Task("Fix Domain B", wave_id="w1", timeout="30min")
Task("Fix Domain C", wave_id="w1", timeout="30min")

# Shannon wave monitors parallel execution
# MCP tracks: start_time, end_time, status per agent
```
Result structure (Serena):

```yaml
parallel_dispatch:
  wave_id: "w1"
  domains_completed: 3
  time_sequential_equivalent: 90min
  time_actual_parallel: 35min
  efficiency_score: 2.57  # 90 / 35 = 2.57x faster
  per_domain:
    - domain: "Tool approval"
      agent_success_score: 0.95  # 2/3 fixed, 1 minor issue
      root_causes_found: 2
      files_modified: 3
    - domain: "Batch completion"
      agent_success_score: 1.00  # 2/2 fixed perfectly
      root_causes_found: 1
      files_modified: 2
    - domain: "Abort functionality"
      agent_success_score: 0.85  # 1/3 fixed, 1 partial
      root_causes_found: 1
      files_modified: 4
```
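As a sanity check, the wave-level numbers can be recomputed from the per-domain list; the values below are copied from the example result above:

```python
# Per-domain scores copied from the example result
per_domain = [
    {"domain": "Tool approval", "agent_success_score": 0.95},
    {"domain": "Batch completion", "agent_success_score": 1.00},
    {"domain": "Abort functionality", "agent_success_score": 0.85},
]

efficiency = 90 / 35  # sequential-equivalent minutes / actual parallel minutes
mean_score = sum(d["agent_success_score"] for d in per_domain) / len(per_domain)
```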
After agents return:
**Domain:** Fix agent-tool-abort.test.ts failures
**Scope:** Only this file and its immediate dependencies
**Success Metric:** Fix all 3 failing tests
Failing tests:
1. "should abort tool with partial output" → expects 'interrupted'
2. "should handle mixed completed/aborted" → timing issue
3. "should properly track pendingToolCount" → gets 0, expects 3
Your task:
1. Identify root causes
2. Fix with minimal code changes
3. Verify all 3 tests pass
Return: Summary of root causes, changes made, final test results
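A scoped prompt like the one above can also be assembled programmatically; `build_agent_prompt` here is a hypothetical helper, not part of the framework:

```python
def build_agent_prompt(domain: str, scope: str, failing_tests: list[str]) -> str:
    """Assemble a tightly scoped per-domain task prompt."""
    tests = "\n".join(f"{i}. {t}" for i, t in enumerate(failing_tests, 1))
    return (
        f"**Domain:** {domain}\n"
        f"**Scope:** {scope}\n"
        f"**Success Metric:** Fix all {len(failing_tests)} failing tests\n\n"
        f"Failing tests:\n{tests}\n\n"
        "Return: Summary of root causes, changes made, final test results"
    )

prompt = build_agent_prompt(
    "Fix agent-tool-abort.test.ts failures",
    "Only this file and its immediate dependencies",
    ["should abort tool with partial output", "should handle mixed completed/aborted"],
)
```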
Per-domain score:

```python
agent_success_score = (
    (problems_fixed / problems_identified) * 0.6
    + test_pass_rate * 0.3
    + (1.0 if no_conflicts else 0.0) * 0.1
)
```

Range: 0.00-1.00
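The formula translates directly into a helper function (parameter names are taken from the formula above):

```python
def agent_success_score(problems_fixed: int, problems_identified: int,
                        test_pass_rate: float, no_conflicts: bool) -> float:
    """Weighted per-domain score in the range 0.00-1.00."""
    return (
        (problems_fixed / problems_identified) * 0.6
        + test_pass_rate * 0.3
        + (1.0 if no_conflicts else 0.0) * 0.1
    )
```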
Overall parallel efficiency:

```python
efficiency_score = sequential_time_cost / actual_parallel_time
```

Range: 1.0x (no benefit) to Nx (all N agents running fully in parallel)
- ❌ Too broad scope (fix everything) → ✅ Specific scope (one test file)
- ❌ No domain metrics → ✅ Track agent_success_score per domain
- ❌ Skip conflict detection → ✅ MCP checks file modifications across agents
- ❌ Sequential dispatch (defeats the purpose) → ✅ Launch all agents concurrently with wave_id
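A minimal sketch of the conflict check described above: flag any file modified by more than one agent in the same wave (function and data names are illustrative, not the MCP's actual interface):

```python
from collections import defaultdict

def detect_conflicts(modified_files: dict[str, list[str]]) -> set[str]:
    """Return files touched by more than one agent in the same wave."""
    touched_by: dict[str, set[str]] = defaultdict(set)
    for agent, files in modified_files.items():
        for path in files:
            touched_by[path].add(agent)
    return {path for path, agents in touched_by.items() if len(agents) > 1}
```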
Track across sessions:
Use historical data to:
- With: testing-skills-with-subagents (each agent tests independently)
- MCP: tracks file modifications, test results, timing
- Serena: metrics for pattern learning
From debugging session (2025):