Use for implementation plans - dispatches fresh subagent per task with quality scoring (0.00-1.00), code review gates, Serena pattern learning, and MCP tracking
Executes implementation plans with fresh subagents per task, enforcing quality gates (score > 0.80) before advancing. Uses numerical scoring, code reviews, and pattern learning to ensure each task meets completeness, test coverage, and review standards.
```
/plugin marketplace add krzemienski/shannon-framework
/plugin install shannon@shannon-framework
```
This skill inherits all available tools. When active, it can use any tool Claude has access to.
Execute plans with fresh subagent per task + quantitative quality gates.
Each task: Subagent implementation → Code review → Quality score → Advance if > 0.80.
Shannon enhancement: Numerical quality scoring, pattern learning from task history, MCP work tracking.
Same-session task execution:
Don't use when:
Per-task scoring:
```
task_quality_score = (
    implementation_completeness * 0.4 +
    test_coverage * 0.35 +
    code_review_score * 0.25
)
```
Thresholds:
Gate rule: Only advance to next task if task_quality_score > 0.80
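For example, a minimal sketch of this gate in Python (the helper names and sample inputs are hypothetical):

```python
# Minimal sketch of the per-task quality gate; assumes each input is already
# normalized to the 0.0-1.0 range.
def task_quality_score(implementation_completeness: float,
                       test_coverage: float,
                       code_review_score: float) -> float:
    return (implementation_completeness * 0.4
            + test_coverage * 0.35
            + code_review_score * 0.25)

def passes_gate(score: float, threshold: float = 0.80) -> bool:
    return score > threshold

# Hypothetical task: complete implementation, 85% coverage, 0.90 review score.
score = task_quality_score(1.0, 0.85, 0.90)   # 0.9225
print(passes_gate(score))                      # True -> advance to next task
```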
```
# Serena metric initialization
plan_metadata = {
    plan_file: "path/to/plan.md",
    total_tasks: N,
    tasks_completed: 0,
    cumulative_quality_score: 0.0,
    estimated_completion_time: "X hours"
}
```
Create a TodoWrite list with all tasks in the "pending" state.
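As a rough illustration (the field names below are assumptions, not the exact TodoWrite schema, and the task names are hypothetical), the initial list might look like:

```python
# Illustrative initial todo state; field names and task names are hypothetical.
todos = [
    {"content": "Task 1: Add user model", "status": "pending"},
    {"content": "Task 2: Add auth endpoints", "status": "pending"},
    {"content": "Task 3: Write integration tests", "status": "pending"},
    {"content": "Task 4: Wire up CI", "status": "pending"},
]
```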
Mark task as in_progress. Dispatch:
```
**Task N:** [task_name]

Implementation scope: [read task N from plan]

Requirements:
1. Implement exactly what task specifies
2. Write tests (TDD if task requires)
3. Verify all tests pass
4. Commit with conventional message
5. Report: what you implemented, test results, files changed

Work directory: [path]
```
Subagent reports implementation summary.
Dispatch code-reviewer with metrics:
```
# Review metrics (Serena)
review_context = {
    task_number: N,
    what_implemented: "[subagent report]",
    plan_requirement: "[task N from plan]",
    base_commit: "[SHA before task]",
    head_commit: "[SHA after task]",
    test_results: "[pass/fail counts]"
}
```
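A sketch of how this context could be assembled, assuming exactly one commit per task and that test counts are parsed from the subagent's report (the specific values shown are hypothetical):

```python
# Sketch: build the review context from git metadata; values are illustrative.
import subprocess

def git_sha(ref: str = "HEAD") -> str:
    """Resolve a git ref to its commit SHA."""
    return subprocess.run(["git", "rev-parse", ref],
                          capture_output=True, text=True, check=True).stdout.strip()

review_context = {
    "task_number": 3,                                         # hypothetical
    "what_implemented": "Auth endpoints plus request tests",  # from subagent report
    "plan_requirement": "Task 3 from plan.md",
    "base_commit": git_sha("HEAD~1"),   # assumption: one commit per task
    "head_commit": git_sha("HEAD"),
    "test_results": "42 passed, 0 failed",                    # hypothetical counts
}
```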
Reviewer returns: Strengths, Issues (Critical/Important/Minor), Assessment.
Calculate code_review_score:
```
code_review_score = (
    0.0 if critical_issues > 0 else (
        1.0 - (important_issues * 0.15) - (minor_issues * 0.05)
    )
)
```
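A small sketch of that penalty formula, with the added assumption that the result is floored at 0.0 when many issues accumulate:

```python
# Review score: any critical issue zeroes the score; otherwise subtract
# per-issue penalties and floor at 0.0 (the floor is an added assumption).
def code_review_score(critical: int, important: int, minor: int) -> float:
    if critical > 0:
        return 0.0
    return max(0.0, 1.0 - important * 0.15 - minor * 0.05)

print(code_review_score(0, 1, 2))   # 1.0 - 0.15 - 0.10 = 0.75
```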
```
task_quality_score = (
    implementation_completeness * 0.4 +   # Did they do what task asked?
    test_coverage * 0.35 +                # Tests > 80% code coverage?
    code_review_score * 0.25              # Review assessment score
)
```
Log to Serena with timestamp.
```
IF task_quality_score > 0.80:
    ✓ ADVANCE: Mark complete, next task
ELSE:
    ⚠ NEEDS FIXES: Dispatch fix subagent
      "Fix issues from review: [list critical/important]"
    Re-review and recalculate score
    Loop until > 0.80
```
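A sketch of that loop, assuming `dispatch_fix_subagent()`, `review_task()`, and the shape of the review object are placeholders for however your workflow actually dispatches the fix subagent and the reviewer, with a cap on rework cycles rather than looping forever:

```python
# Quality-gate loop sketch; dispatch_fix_subagent() and review_task() are
# hypothetical wrappers around the real subagent/reviewer dispatches.
MAX_REWORK_CYCLES = 3   # assumption: bail out instead of looping indefinitely

def enforce_gate(task, review, threshold: float = 0.80) -> bool:
    cycles = 0
    while True:
        score = task_quality_score(
            review.implementation_completeness,
            review.test_coverage,
            code_review_score(len(review.critical_issues),
                              len(review.important_issues),
                              len(review.minor_issues)),
        )
        if score > threshold:
            return True                        # gate passed: advance to next task
        if cycles >= MAX_REWORK_CYCLES:
            return False                       # rework budget exhausted: escalate
        dispatch_fix_subagent(task, review.critical_issues + review.important_issues)
        review = review_task(task)             # re-review the fixes
        cycles += 1
```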
After all tasks pass gates:
```
# Calculate cumulative metrics
cumulative_quality = average(all_task_quality_scores)
tasks_requiring_rework = count(tasks with initial score <= 0.80)

# Log to Serena for pattern learning
completion_metrics = {
    cumulative_quality_score: X.XX,
    tasks_completed: N,
    total_rework_cycles: R,
    average_quality_trend: "improving|stable|declining"
}
```
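A sketch of how these could be computed from per-task scores (the trend heuristic below, comparing the first and second half of the run, is an assumption):

```python
# Completion metrics sketch; the +/-0.02 trend threshold is an assumption.
def completion_metrics(task_scores: list[float], rework_cycles: int) -> dict:
    n = len(task_scores)
    first = sum(task_scores[: n // 2]) / max(1, n // 2)
    second = sum(task_scores[n // 2:]) / max(1, n - n // 2)
    trend = ("improving" if second > first + 0.02
             else "declining" if second < first - 0.02
             else "stable")
    return {
        "cumulative_quality_score": round(sum(task_scores) / n, 2),
        "tasks_completed": n,
        "total_rework_cycles": rework_cycles,
        "average_quality_trend": trend,
    }

# Hypothetical 4-task run in which the first task needed one rework cycle.
print(completion_metrics([0.80, 0.88, 0.92, 0.96], rework_cycles=1))
# ≈ 0.89 cumulative quality, trend "improving"
```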
Dispatch final code-reviewer: "Review all completed tasks for integration readiness."
After final review passes:
I'm using the finishing-a-development-branch skill to complete this work.
Follow finishing-a-development-branch workflow.
Track across plans:
Use historical patterns to:
| ❌ | ✅ |
|---|---|
| Skip code review between tasks | Review every task before advancing |
| Ignore task_quality_score, advance anyway | Enforce quality gate > 0.80 |
| Don't track metrics | Log all scores to Serena for learning |
| Parallel implementation subagents (conflicts) | One subagent at a time, sequential |
Never:
Requires:
With:
Sample implementation (4 tasks):
This skill should be used when the user asks to "create a slash command", "add a command", "write a custom command", "define command arguments", "use command frontmatter", "organize commands", "create command with file references", "interactive command", "use AskUserQuestion in command", or needs guidance on slash command structure, YAML frontmatter fields, dynamic arguments, bash execution in commands, user interaction patterns, or command development best practices for Claude Code.
This skill should be used when the user asks to "create an agent", "add an agent", "write a subagent", "agent frontmatter", "when to use description", "agent examples", "agent tools", "agent colors", "autonomous agent", or needs guidance on agent structure, system prompts, triggering conditions, or agent development best practices for Claude Code plugins.
This skill should be used when the user asks to "create a hook", "add a PreToolUse/PostToolUse/Stop hook", "validate tool use", "implement prompt-based hooks", "use ${CLAUDE_PLUGIN_ROOT}", "set up event-driven automation", "block dangerous commands", or mentions hook events (PreToolUse, PostToolUse, Stop, SubagentStop, SessionStart, SessionEnd, UserPromptSubmit, PreCompact, Notification). Provides comprehensive guidance for creating and implementing Claude Code plugin hooks with focus on advanced prompt-based hooks API.