From shannon
Use before skill deployment to verify pressure resistance via the TDD RED-GREEN-REFACTOR cycle with Serena metrics tracking. Measures a compliance score (0.00-1.00) across pressure scenarios.

```
npx claudepluginhub krzemienski/shannon-framework --plugin shannon
```

This skill uses the workspace's default tool permissions.
**TDD for process documentation with quantitative metrics.**
Red-Green-Refactor applies to skills: (1) run a baseline WITHOUT the skill, (2) write the skill addressing the observed failures, (3) close loopholes until bulletproof. The Shannon enhancement adds Serena metrics tracking for compliance scoring.
Compliance Scoring (0.00-1.00):
Test skills that enforce discipline under pressure (TDD, code review, testing).
Don't test: reference skills, pure documentation, or skills without rules.
| Phase | Action | Shannon Metric |
|---|---|---|
| RED | Run scenario WITHOUT skill | baseline_failures: count violations |
| GREEN | Write skill, run WITH skill | compliance_score: % correct choices |
| REFACTOR | Close loopholes | loophole_count: new rationalizations found |
| Verify | Re-test scenarios | final_score: 0.00-1.00 |
**RED** (Serena metric: `baseline_failures`):

- Run pressure scenarios WITHOUT the skill
- Document rationalizations verbatim
- Log failure patterns to Serena:
  - `pressure_type`: "sunk_cost|time|authority|exhaustion|social"
  - `rationalization`: exact words the agent used
  - `scenario_complexity`: 1-3 combined pressures
Pressure types to combine: sunk cost, time, authority, exhaustion, social.
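A minimal sketch of what a RED-phase failure record could look like. The `BaselineFailure` type and the sample rationalization strings are hypothetical, not part of the skill; only the three field names come from the schema above.

```python
from dataclasses import dataclass

# Hypothetical record mirroring the fields logged per RED-phase violation.
@dataclass
class BaselineFailure:
    pressure_type: str        # "sunk_cost", "time", "authority", "exhaustion", or "social"
    rationalization: str      # exact words the agent used, captured verbatim
    scenario_complexity: int  # number of combined pressures, 1-3

# Two illustrative (made-up) violations from a baseline run.
failures = [
    BaselineFailure("sunk_cost", "we've already written most of the code", 2),
    BaselineFailure("time", "there's no time to write the test first", 1),
]

# The Serena metric is simply the violation count.
baseline_failures = len(failures)
```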
**GREEN**: Write the skill addressing ONLY the observed baseline failures.
Run the same scenarios WITH the skill and document:
compliance_score = correct_choices / total_scenarios (range 0.00-1.00)
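The raw GREEN-phase score is just the fraction of scenarios handled correctly. A minimal sketch (the function name is mine, not the skill's):

```python
def green_score(correct_choices: int, total_scenarios: int) -> float:
    """Raw GREEN-phase compliance: fraction of scenarios handled correctly."""
    if total_scenarios == 0:
        raise ValueError("need at least one scenario")
    return round(correct_choices / total_scenarios, 2)
```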
**REFACTOR** (Serena metric: `loophole_count`). For each new rationalization the agent creates:

- Capture the exact wording
- Add an explicit negation to the skill
- Update the rationalization table
- Re-test until compliant
Meta-testing: ask the agent "How could this be clearer?" and evaluate its responses.
Compliance score calculation:

compliance_score = (correct_choices / total_scenarios) * 0.6
                 + (0.0 if new_rationalizations_found else 1.0) * 0.3
                 + (1.0 if meta_test_passes else 0.0) * 0.1
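The weighted formula can be sketched as a small function. The parameter names are mine; the weights (0.6 / 0.3 / 0.1) are taken directly from the formula above.

```python
def compliance_score(correct_choices: int,
                     total_scenarios: int,
                     new_rationalizations: int,
                     meta_test_passes: bool) -> float:
    """Weighted compliance score in [0.00, 1.00].

    Scenario accuracy dominates (0.6); any new rationalization zeroes
    the loophole term (0.3); the meta-test contributes the last 0.1.
    """
    scenario_term = (correct_choices / total_scenarios) * 0.6
    loophole_term = (0.0 if new_rationalizations > 0 else 1.0) * 0.3
    meta_term = (1.0 if meta_test_passes else 0.0) * 0.1
    return round(scenario_term + loophole_term + meta_term, 2)
```

Note that a single new rationalization costs the full 0.3, so a skill cannot pass the 0.85 bar while the agent is still inventing loopholes.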
Pattern learning (Serena), example trajectory:

- Baseline: compliance_score 0.33
- First iteration: compliance_score 0.67
- Second iteration: compliance_score 0.95
| ❌ Anti-pattern | ✅ Instead |
|---|---|
| Skip baseline testing (skipping RED) | Always watch it fail first |
| Weak pressure (single pressure only) | Combine 3+ pressures |
| Vague fixes ("Don't cheat") | Explicit negations ("Don't keep as reference") |
| Stop after first pass | Continue refactoring until compliance_score > 0.85 |
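The "keep refactoring until compliance_score > 0.85" gate can be sketched as a loop. `run_iteration` is a hypothetical stand-in for re-running the pressure scenarios against the current skill draft and returning its score; the 0.85 threshold comes from the guidance above.

```python
THRESHOLD = 0.85  # minimum compliance_score from the guidance above

def refactor_until_compliant(run_iteration, max_rounds: int = 10) -> float:
    """Re-run scenarios and close loopholes until the score clears 0.85.

    run_iteration: hypothetical callable that re-tests the current skill
    draft and returns its compliance_score (0.00-1.00).
    """
    score = 0.0
    for _ in range(max_rounds):
        score = run_iteration()
        if score > THRESHOLD:
            break  # bulletproof enough: stop refactoring
    return score
```

With the example trajectory above (0.33 → 0.67 → 0.95), the loop would stop after the second refactor iteration.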
MCP Pattern: Log all metrics to Serena for pattern learning across multiple skills.
Subagent workflow: Use with dispatching-parallel-agents to test multiple skills simultaneously.
Testing TDD skill itself (2025):