From shannon

Use for implementation plans - dispatches a fresh subagent per task with quality scoring (0.00-1.00), code review gates, Serena pattern learning, and MCP tracking.

`npx claudepluginhub krzemienski/shannon-framework --plugin shannon`

This skill uses the workspace's default tool permissions.
**Execute plans with fresh subagent per task + quantitative quality gates.**
Each task: Subagent implementation → Code review → Quality score → Advance if > 0.80.
Shannon enhancement: Numerical quality scoring, pattern learning from task history, MCP work tracking.
Same-session task execution:
Don't use when:
Per-task scoring:
```
task_quality_score = (
    implementation_completeness * 0.40 +
    test_coverage * 0.35 +
    code_review_score * 0.25
)
```
Thresholds:
Gate rule: Only advance to next task if task_quality_score > 0.80
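The scoring formula and gate rule can be sketched as a small Python helper (function names here are illustrative, not part of the skill itself):

```python
def task_quality_score(implementation_completeness: float,
                       test_coverage: float,
                       code_review_score: float) -> float:
    """Weighted per-task quality score; each input is in [0.0, 1.0]."""
    return (implementation_completeness * 0.40
            + test_coverage * 0.35
            + code_review_score * 0.25)


def may_advance(score: float, gate: float = 0.80) -> bool:
    """Gate rule: only advance to the next task if score > gate."""
    return score > gate
```

For example, a fully complete task (1.0) with 0.9 test coverage and a 0.8 review score yields 0.915 and passes the gate.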
```
# Serena metric initialization
plan_metadata = {
    plan_file: "path/to/plan.md",
    total_tasks: N,
    tasks_completed: 0,
    cumulative_quality_score: 0.0,
    estimated_completion_time: "X hours"
}
```
Create TodoWrite with all tasks, all in "pending" state.
Mark task as in_progress. Dispatch:
**Task N:** [task_name]
Implementation scope: [read task N from plan]
Requirements:
1. Implement exactly what task specifies
2. Write tests (TDD if task requires)
3. Verify all tests pass
4. Commit with conventional message
5. Report: what you implemented, test results, files changed
Work directory: [path]
Subagent reports implementation summary.
Dispatch code-reviewer with metrics:
```
# Review metrics (Serena)
review_context = {
    task_number: N,
    what_implemented: "[subagent report]",
    plan_requirement: "[task N from plan]",
    base_commit: "[SHA before task]",
    head_commit: "[SHA after task]",
    test_results: "[pass/fail counts]"
}
```
Reviewer returns: Strengths, Issues (Critical/Important/Minor), Assessment.
Calculate code_review_score (clamped at 0.0 so stacked deductions cannot produce a negative score):

```
code_review_score = (
    0.0 if critical_issues > 0 else
    max(0.0, 1.0 - (important_issues * 0.15) - (minor_issues * 0.05))
)
```
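As a runnable sketch, the review score rule looks like this (the clamp to 0.0 is an added safeguard, since enough minor issues could otherwise drive the raw formula negative):

```python
def code_review_score(critical_issues: int,
                      important_issues: int,
                      minor_issues: int) -> float:
    """Any critical issue zeroes the score; otherwise deduct per issue,
    clamped so the score never drops below 0.0."""
    if critical_issues > 0:
        return 0.0
    return max(0.0, 1.0 - important_issues * 0.15 - minor_issues * 0.05)
```

So one important and two minor issues score 0.75, while a single critical issue scores 0.0 regardless of anything else.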
```
task_quality_score = (
    implementation_completeness * 0.40 +  # Did they do what the task asked?
    test_coverage * 0.35 +                # Tests > 80% code coverage?
    code_review_score * 0.25              # Review assessment score
)
```
Log to Serena with timestamp.
```
IF task_quality_score > 0.80:
    ✓ ADVANCE: Mark complete, move to next task
ELSE:
    ⚠ NEEDS FIXES: Dispatch fix subagent
        "Fix issues from review: [list critical/important]"
    Re-review and recalculate score
    Loop until score > 0.80
```
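A minimal sketch of the fix-and-re-review loop; `score_fn` and `fix_fn` are stand-ins for the real reviewer and fix-subagent dispatches, and the `max_cycles` cap is an added safety assumption, not part of the skill:

```python
def run_quality_gate(score_fn, fix_fn, threshold=0.80, max_cycles=3):
    """Re-dispatch fixes and re-score until the gate passes or cycles run out.

    score_fn() -> current task_quality_score after (re-)review.
    fix_fn()   -> dispatches a fix subagent for the reported issues.
    """
    score = score_fn()
    cycles = 0
    while score <= threshold and cycles < max_cycles:
        fix_fn()            # "Fix issues from review: [critical/important]"
        score = score_fn()  # re-review and recalculate
        cycles += 1
    return score > threshold, cycles
```

With a first-pass score of 0.70 and a post-fix score of 0.85, the gate passes after one rework cycle.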
After all tasks pass gates:
```
# Calculate cumulative metrics
cumulative_quality = average(all_task_quality_scores)
tasks_requiring_rework = count(tasks with score <= 0.80 initially)

# Log to Serena for pattern learning
completion_metrics = {
    cumulative_quality_score: X.XX,
    tasks_completed: N,
    total_rework_cycles: R,
    average_quality_trend: "improving|stable|declining"
}
```
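A sketch of computing these completion metrics in Python; the trend heuristic (comparing the second half of task scores to the first half, with a 0.05 tolerance) is an assumption for illustration, not defined by the skill:

```python
from statistics import mean


def completion_metrics(task_scores, initial_scores, rework_cycles):
    """Cumulative plan metrics in the shape logged to Serena.

    task_scores    -> final per-task quality scores, in task order.
    initial_scores -> first-pass scores, used to count rework.
    """
    half = len(task_scores) // 2
    if half == 0:
        trend = "stable"
    else:
        delta = mean(task_scores[half:]) - mean(task_scores[:half])
        trend = ("improving" if delta > 0.05
                 else "declining" if delta < -0.05
                 else "stable")
    return {
        "cumulative_quality_score": mean(task_scores),
        "tasks_completed": len(task_scores),
        "tasks_requiring_rework": sum(s <= 0.80 for s in initial_scores),
        "total_rework_cycles": rework_cycles,
        "average_quality_trend": trend,
    }
```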
Dispatch final code-reviewer: "Review all completed tasks for integration readiness."
After final review passes:
I'm using the finishing-a-development-branch skill to complete this work.
Follow finishing-a-development-branch workflow.
Track across plans:
Use historical patterns to:
| ❌ Don't | ✅ Do |
| --- | --- |
| Skip code review between tasks | Review every task before advancing |
| Ignore task_quality_score, advance anyway | Enforce quality gate > 0.80 |
| Don't track metrics | Log all scores to Serena for learning |
| Parallel implementation subagents (conflicts) | One subagent at a time, sequential |
Never:
Requires:
With:
Sample implementation (4 tasks):