Use when about to claim work is complete, fixed, passing, or successful, or before committing or creating PRs. Requires running verification commands and confirming their output before making ANY success claims. Evidence before assertions, always; no exceptions.
Before claiming any work is complete, this skill requires running fresh verification commands and confirming quantitative results. It triggers when you're about to commit, create PRs, or mark tasks as done—forcing evidence-based claims over assumptions.
To install:

```
/plugin marketplace add krzemienski/shannon-framework
/plugin install shannon@shannon-framework
```

This skill inherits all available tools. When active, it can use any tool Claude has access to.
Claiming work is complete without verification is dishonesty, not efficiency.
Core principle: Evidence before claims, always.
Violating the spirit of this rule is violating the rule itself, no matter how the claim is reworded.
NO COMPLETION CLAIMS WITHOUT FRESH VERIFICATION EVIDENCE
If you haven't run the verification command in this message, you cannot claim it passes.
BEFORE claiming any status or expressing satisfaction:
1. IDENTIFY: What command proves this claim?
2. RUN: Execute the FULL command (fresh, complete)
3. READ: Full output, check exit code, count failures
4. VERIFY: Does output confirm the claim?
- If NO: State actual status with evidence
- If YES: State claim WITH evidence
5. ONLY THEN: Make the claim
Skip any step = lying, not verifying
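A minimal sketch of this loop in code (the helper is hypothetical, not a Shannon API; the proving command is whatever your claim requires):

```python
import subprocess

def verify_claim(command: list[str], claim: str) -> bool:
    """Run the proving command fresh, read its output, and only then allow the claim."""
    result = subprocess.run(command, capture_output=True, text=True)
    print(result.stdout)  # READ the full output, not a summary
    if result.returncode != 0:
        print(f"Cannot claim {claim!r}: exit code {result.returncode}")
        return False
    print(f"Claim {claim!r} supported: exit code 0")
    return True

# Example: prove "tests pass" before saying it
# verify_claim(["pytest", "tests/"], "tests pass")
```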
| Claim | Requires | Not Sufficient |
|---|---|---|
| Tests pass | Test command output: 0 failures | Previous run, "should pass" |
| Linter clean | Linter output: 0 errors | Partial check, extrapolation |
| Build succeeds | Build command: exit 0 | Linter passing, logs look good |
| Bug fixed | Test original symptom: passes | Code changed, assumed fixed |
| Regression test works | Red-green cycle verified | Test passes once |
| Agent completed | VCS diff shows changes | Agent reports "success" |
| Requirements met | Line-by-line checklist | Tests passing |
| All validations pass | 3-tier validation output | Individual tiers passed separately |
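The table can also be read as a mapping from claims to the commands that prove them. A sketch (the commands here are illustrative placeholders, not Shannon defaults):

```python
# Each claim is only speakable if its proving command has been run fresh
PROOF_COMMANDS = {
    "tests pass": ["pytest", "tests/"],
    "linter clean": ["ruff", "check", "."],
    "build succeeds": ["npm", "run", "build"],
}

def require_proof(claim: str) -> list[str]:
    """Refuse any claim that has no proving command defined."""
    if claim not in PROOF_COMMANDS:
        raise ValueError(f"No verification command defined for claim: {claim!r}")
    return PROOF_COMMANDS[claim]
```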
Shannon-specific: Projects have 3-tier validation gates:
**Tier 1 (Flow)**
Purpose: Code compiles, dependencies resolve, syntax correct
Common commands:
- `tsc --noEmit`
- `ruff check .` or `flake8 .`
- `go vet ./...`
- `cargo check`

Required output: 0 errors, 0 warnings (zero tolerance)
**Tier 2 (Artifacts)**
Purpose: Tests pass, builds succeed, artifacts created
Common commands:
- `npm test`, `pytest`, `go test ./...`, `cargo test`
- `npm run build`, `cargo build --release`

Required output: 0 test failures, build exits 0, artifacts created
**Tier 3 (Functional)**
Purpose: Real system behavior verified (NO MOCKS)
Shannon requirement: Tests must use real systems:
Required evidence:
Based on work type:
Auto-detect gates:
```python
# Shannon detects validation gates from project
gates = detect_validation_gates(project_path)
# Returns: {
#   "tier1": ["tsc --noEmit", "ruff check ."],
#   "tier2": ["npm test", "npm run build"],
#   "tier3": ["npm run test:e2e"]
# }
```
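Building on that shape, a sketch of running the detected gates and aggregating quantitative results (assumes the dict returned above; this is not Shannon's actual runner):

```python
import subprocess

def run_gates(gates: dict[str, list[str]]) -> bool:
    """Execute every gate command fresh and report per-command exit codes."""
    all_pass = True
    for tier, commands in gates.items():
        for cmd in commands:
            result = subprocess.run(cmd, shell=True, capture_output=True, text=True)
            status = "PASS" if result.returncode == 0 else "FAIL"
            print(f"{tier}: {cmd} -> exit {result.returncode} ({status})")
            all_pass = all_pass and (result.returncode == 0)
    return all_pass

# ready = run_gates(gates)  # only claim completion if ready is True
```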
MANDATORY: Fresh execution in this message.
Not acceptable: output from a previous message, cached results, partial checks, or "it passed before."
REQUIRED: run the complete command now and read its full output before making any claim.
Shannon requirement: Quantitative reporting.
Good examples:

```
✅ Tier 1 (Flow): tsc --noEmit
   Exit code: 0
   Errors: 0
   Warnings: 0
   Status: PASS

✅ Tier 2 (Artifacts): npm test
   Exit code: 0
   Tests: 34/34 pass
   Failures: 0
   Skipped: 0
   Duration: 2.3s
   Status: PASS

✅ Tier 3 (Functional): npm run test:e2e
   Exit code: 0
   Browser tests: 12/12 pass
   Screenshots captured: 12
   Real browser: Chrome
   Status: PASS (NO MOCKS verified)
```
Bad examples:
❌ "Tests passed"
❌ "Build looks good"
❌ "Everything works"
❌ "Linter is clean"
Format:

```markdown
## Verification Complete

[Evidence from steps above]

**Claim**: All validations pass. Work is complete.
**Evidence**: 3/3 tiers verified above.
**Ready for**: Commit / PR / Deployment
```
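One way to keep reports quantitative is to parse counts straight out of the test runner's output instead of paraphrasing it. A sketch against pytest's standard summary footer (the test path is a placeholder):

```python
import re
import subprocess

def pytest_counts(target: str = "tests/") -> dict[str, int]:
    """Extract pass/fail/skip counts from pytest's summary line."""
    out = subprocess.run(["pytest", target], capture_output=True, text=True).stdout
    counts = {"passed": 0, "failed": 0, "skipped": 0}
    for key in counts:
        match = re.search(rf"(\d+) {key}", out)
        if match:
            counts[key] = int(match.group(1))
    return counts

# c = pytest_counts()
# print(f"Tests: {c['passed']} pass, Failures: {c['failed']}, Skipped: {c['skipped']}")
```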
If you catch yourself about to say "should work now," "looks good," "tests should pass," or any other unverified success claim: STOP. Run the verifications. THEN claim.
| Excuse | Reality |
|---|---|
| "Should work now" | RUN the verification |
| "I'm confident" | Confidence ≠ evidence |
| "Just this once" | No exceptions |
| "Linter passed" | Linter ≠ compiler |
| "Agent said success" | Verify independently |
| "I'm tired" | Exhaustion ≠ excuse |
| "Partial check is enough" | Partial proves nothing |
| "Different words so rule doesn't apply" | Spirit over letter |
MANDATORY for Tier 3: Verify tests use real systems.
Check for violations:

```python
# Scan test files for mock imports (pseudocode)
violations = grep_r("import.*mock|from.*mock|jest.mock", "tests/")
if violations:
    ERROR: "MOCK VIOLATION: Tests use mocking"
    EVIDENCE: [list violations]
    STATUS: FAIL (Shannon NO MOCKS requirement violated)
```
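For reference, a runnable version of that scan, as a sketch (the directory, file extensions, and pattern are assumptions; adjust to your test layout):

```python
import re
from pathlib import Path

# Assumed pattern: common mock imports across Python and JS/TS test suites
MOCK_PATTERN = re.compile(r"import\s+\S*mock|from\s+\S*mock|jest\.mock", re.IGNORECASE)

def find_mock_violations(test_dir: str = "tests") -> list[str]:
    """Return file:line entries for every mock usage found under test_dir."""
    violations = []
    for path in Path(test_dir).rglob("*"):
        if path.suffix in {".py", ".js", ".ts"}:
            for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
                if MOCK_PATTERN.search(line):
                    violations.append(f"{path}:{lineno}: {line.strip()}")
    return violations

# Any hit is a Tier 3 FAIL under the NO MOCKS requirement:
# for v in find_mock_violations():
#     print("MOCK VIOLATION:", v)
```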
Acceptable real systems:
- Real browsers (`await page.goto(...)`)
- Device simulators (`ios_simulator.launch_app(...)`)
- Real databases (`await db.connect(...)`)
- Real HTTP requests (`await fetch(actual_url)`)

NOT acceptable:
- `jest.mock('puppeteer')`
- `unittest.mock.patch(...)`
- `sinon.stub(...)`

MANDATORY for bug fixes: Prove test catches the bug.
RED-GREEN cycle:
1. Write test reproducing bug
2. Run test → MUST FAIL with bug present
3. Fix bug
4. Run test → MUST PASS with fix
5. Revert fix (temporarily)
6. Run test → MUST FAIL again (proves test works)
7. Restore fix
8. Run test → MUST PASS (final verification)
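Steps 5-8 can be scripted. A minimal sketch using git stash (this touches your working tree; the test command is a placeholder taken from the worked example below):

```python
import subprocess

TEST = "pytest tests/test_auth_middleware.py::test_empty_token"  # placeholder

def run(cmd: str) -> int:
    """Run a shell command and return its exit code."""
    return subprocess.run(cmd, shell=True).returncode

def verify_red_green() -> bool:
    """Prove the test actually catches the bug: fail without the fix, pass with it."""
    subprocess.run("git stash", shell=True)        # 5. revert fix temporarily
    red = run(TEST) != 0                           # 6. MUST FAIL with bug present
    subprocess.run("git stash pop", shell=True)    # 7. restore fix
    green = run(TEST) == 0                         # 8. MUST PASS with fix restored
    return red and green
```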
Required evidence:

```markdown
## Regression Test Verification

✅ RED: Test failed with bug present
   Output: [failure message matching bug symptom]
✅ GREEN: Test passed with fix
   Output: [success]
✅ RED (reverified): Test failed with fix reverted
   Output: [same failure as step 1]
✅ GREEN (final): Test passed with fix restored
   Output: [success]

**Claim**: Regression test verified via RED-GREEN cycle.
**Evidence**: 4/4 phases above.
```
Format for Shannon 3-tier validation:
## Shannon Validation Gates
| Tier | Gate | Command | Result | Evidence |
|------|------|---------|--------|----------|
| 1 (Flow) | TypeScript | `tsc --noEmit` | ✅ PASS | 0 errors, 0 warnings |
| 1 (Flow) | Linter | `ruff check .` | ✅ PASS | 0 errors, 0 warnings |
| 2 (Artifacts) | Tests | `pytest` | ✅ PASS | 45/45 pass, 0 failures |
| 2 (Artifacts) | Build | `npm run build` | ✅ PASS | exit 0, dist/ created |
| 3 (Functional) | E2E | `npm run test:e2e` | ✅ PASS | 8/8 pass, real browser |
**Overall**: 5/5 gates PASS
**NO MOCKS**: Verified (Tier 3 uses Puppeteer)
**Status**: READY FOR COMMIT
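A small sketch of rendering that table from collected gate results (the field names are assumptions, not a Shannon API):

```python
def render_gate_table(rows: list[dict]) -> str:
    """Build the Shannon validation-gates table from per-gate results."""
    lines = [
        "| Tier | Gate | Command | Result | Evidence |",
        "|------|------|---------|--------|----------|",
    ]
    for r in rows:
        mark = "✅ PASS" if r["passed"] else "❌ FAIL"
        lines.append(
            f"| {r['tier']} | {r['gate']} | `{r['command']}` | {mark} | {r['evidence']} |"
        )
    passed = sum(1 for r in rows if r["passed"])
    lines.append(f"\n**Overall**: {passed}/{len(rows)} gates PASS")
    return "\n".join(lines)
```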
Commands that require this skill:
/shannon:do: Before claiming task complete:
1. Run all validation gates
2. Verify 3/3 tiers pass
3. Check NO MOCKS compliance
4. THEN claim completion
/shannon:wave: Before marking wave complete:
1. Run validation gates for wave
2. Verify all tasks' tests pass
3. Check NO MOCKS compliance
4. THEN report wave success
/shannon:execute-plan: After each batch:
1. Run gates for batch
2. Verify tier results
3. Quantitative reporting
4. THEN proceed to next batch
This skill is prerequisite for:
Complementary skills:
Scenario: Bug fix for authentication
Work done:
Verification:
## Verification Report

### Tier 1: Flow Validation

**TypeScript Check**:
```bash
$ tsc --noEmit
# (no output)
```
Exit code: 0 Errors: 0 Warnings: 0 ✅ PASS
**Linter Check**:
```bash
$ ruff check .
All checks passed!
```
Exit code: 0 Issues: 0 ✅ PASS
### Tier 2: Artifacts Validation

**Unit Tests**:
```bash
$ pytest tests/
====== 67 passed in 3.42s ======
```
Exit code: 0 Tests: 67/67 pass Failures: 0 Duration: 3.42s ✅ PASS
**Build**:
```bash
$ npm run build
Build successful. Output: dist/
```
Exit code: 0 Artifacts: dist/ created (2.3 MB) ✅ PASS
### Tier 3: Functional Validation

**E2E Tests (NO MOCKS)**:
```bash
$ npm run test:e2e:auth
✓ User login with valid credentials (browser)
✓ User login with invalid credentials (browser)
✓ Session persistence (browser)
✓ Logout flow (browser)
4 passed (12.5s)
```
Exit code: 0 Tests: 4/4 pass Real browser: Chromium via Puppeteer Screenshots: 4 captured ✅ PASS (NO MOCKS verified)
### Regression Test Verification

**RED phase (bug present)**:
```bash
$ git stash  # Revert fix temporarily
$ pytest tests/test_auth_middleware.py::test_empty_token
FAILED: Expected 401, got 500
```
✅ Test correctly fails with bug

**GREEN phase (fix applied)**:
```bash
$ git stash pop  # Restore fix
$ pytest tests/test_auth_middleware.py::test_empty_token
PASSED
```
✅ Test passes with fix
### Summary

| Category | Result | Evidence |
|---|---|---|
| Tier 1 (Flow) | 2/2 PASS | tsc + ruff clean |
| Tier 2 (Artifacts) | 2/2 PASS | 67/67 tests, build succeeded |
| Tier 3 (Functional) | 1/1 PASS | 4/4 E2E, real browser |
| Regression Test | VERIFIED | RED-GREEN cycle complete |
| NO MOCKS | VERIFIED | Puppeteer used, no mocks |

**Overall**: 5/5 validations PASS
**Status**: READY FOR COMMIT

**Claim**: Bug fix verified complete. All gates pass. Ready for deployment.
**Evidence**: Complete verification report above.
## The Bottom Line
**No shortcuts for verification.**
Run the command. Read the output. THEN claim the result.
This is non-negotiable.
**For mission-critical work**: False completion claims = production failures.
If you follow verification for code reviews, follow it for ALL completion claims.
This skill should be used when the user asks to "create a slash command", "add a command", "write a custom command", "define command arguments", "use command frontmatter", "organize commands", "create command with file references", "interactive command", "use AskUserQuestion in command", or needs guidance on slash command structure, YAML frontmatter fields, dynamic arguments, bash execution in commands, user interaction patterns, or command development best practices for Claude Code.
This skill should be used when the user asks to "create an agent", "add an agent", "write a subagent", "agent frontmatter", "when to use description", "agent examples", "agent tools", "agent colors", "autonomous agent", or needs guidance on agent structure, system prompts, triggering conditions, or agent development best practices for Claude Code plugins.
This skill should be used when the user asks to "create a hook", "add a PreToolUse/PostToolUse/Stop hook", "validate tool use", "implement prompt-based hooks", "use ${CLAUDE_PLUGIN_ROOT}", "set up event-driven automation", "block dangerous commands", or mentions hook events (PreToolUse, PostToolUse, Stop, SubagentStop, SessionStart, SessionEnd, UserPromptSubmit, PreCompact, Notification). Provides comprehensive guidance for creating and implementing Claude Code plugin hooks with focus on advanced prompt-based hooks API.