Use when about to claim work is complete, fixed, or passing, and before committing, creating PRs, or moving to the next task.
npx claudepluginhub harmaalbers/claude-requirements-framework --plugin requirements-framework

This skill uses the workspace's default tool permissions.
Claiming work is complete without verification is dishonesty, not efficiency.
Core principle: Evidence before claims, always.
Violating the letter of this rule is violating the spirit of this rule.
NO COMPLETION CLAIMS WITHOUT FRESH VERIFICATION EVIDENCE
If you haven't run the verification command in this message, you cannot claim it passes.
BEFORE claiming any status or expressing satisfaction:
1. IDENTIFY: What command proves this claim?
2. RUN: Execute the FULL command (fresh, complete)
3. READ: Full output, check exit code, count failures
4. VERIFY: Does output confirm the claim?
- If NO: State actual status with evidence
- If YES: State claim WITH evidence
5. ONLY THEN: Make the claim
Skip any step = lying, not verifying
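The five steps above can be sketched as a small helper. This is a hypothetical sketch, not part of the framework; the `verify` function and the sample command are illustrative placeholders.

```python
import subprocess
import sys

def verify(command: list[str]) -> bool:
    """RUN the full command fresh, READ the output, VERIFY the exit code."""
    result = subprocess.run(command, capture_output=True, text=True)
    print(result.stdout, end="")   # READ: the full output, not a summary
    print(result.stderr, end="")
    return result.returncode == 0  # VERIFY: does the exit code confirm the claim?

# ONLY THEN make the claim, with the evidence attached.
if verify([sys.executable, "-c", "print('2 tests passed')"]):
    print("Verified: command exited 0")
else:
    print("Not verified: state the actual status with evidence")
```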
| Claim | Requires | Not Sufficient |
|---|---|---|
| Tests pass | Test command output: 0 failures | Previous run, "should pass" |
| Linter clean | Linter output: 0 errors | Partial check, extrapolation |
| Build succeeds | Build command: exit 0 | Linter passing, logs look good |
| Bug fixed | Test original symptom: passes | Code changed, assumed fixed |
| Regression test works | Red-green cycle verified | Test passes once |
| Agent completed | VCS diff shows changes | Agent reports "success" |
| Requirements met | Line-by-line checklist | Tests passing |
| Excuse | Reality |
|---|---|
| "Should work now" | RUN the verification |
| "I'm confident" | Confidence ≠ evidence |
| "Just this once" | No exceptions |
| "Linter passed" | Linter ≠ tests |
| "Agent said success" | Verify independently |
| "Partial check is enough" | Partial proves nothing |
| "Different words so rule doesn't apply" | Spirit over letter |
Tests:
- RUN: pytest tests/ -v
- SEE: 34/34 pass, 0 fail
- THEN: "All 34 tests pass"
- NEVER: "Should pass now" / "Looks correct"
Regression tests (TDD Red-Green):
- RUN: Write test → Run (pass) → Revert fix → Run (MUST FAIL) → Restore → Run (pass)
- NEVER: "I've written a regression test" (without red-green verification)
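The red-green cycle can be simulated in miniature. The functions below are hypothetical stand-ins for the buggy baseline and the fixed code; the point is that the test must fail when the fix is absent, otherwise it proves nothing.

```python
# Hypothetical stand-ins: the original defect and the applied fix.
buggy = lambda a, b: a - b   # the bug the report described
fixed = lambda a, b: a + b   # the fix under verification

def regression_test(add) -> bool:
    """Exercises the original symptom from the bug report."""
    return add(2, 3) == 5

assert regression_test(fixed)       # green: passes with the fix
assert not regression_test(buggy)   # red: MUST fail without the fix
assert regression_test(fixed)       # green again: fix restored
print("red-green cycle verified")
```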
Build:
- RUN: python -m py_compile src/module.py
- SEE: exit 0, no output
- THEN: "Build passes"
- NEVER: "Linter passed" (a linter does not catch every compilation error)
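The same check can be run from Python via the stdlib py_compile module, which raises on syntax errors when asked to. The file contents here are hypothetical stand-ins for src/module.py:

```python
import os
import py_compile
import tempfile

# Hypothetical files: one that parses, one that does not.
tmp = tempfile.mkdtemp()
good = os.path.join(tmp, "good.py")
bad = os.path.join(tmp, "bad.py")
with open(good, "w") as f:
    f.write("x = 1\n")
with open(bad, "w") as f:
    f.write("def broken(:\n")

py_compile.compile(good, doraise=True)   # silent success == evidence
print("good.py compiles")
try:
    py_compile.compile(bad, doraise=True)
except py_compile.PyCompileError:
    print("bad.py fails to compile -- report the failure with evidence")
```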
Requirements:
- RUN: Re-read plan → Create checklist → Verify each → Report gaps or completion
- NEVER: "Tests pass, phase complete"
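A line-by-line checklist can be as simple as a dict keyed by requirement. The requirement names below are hypothetical examples; the point is that every item gets checked individually and gaps are reported explicitly:

```python
# Hypothetical checklist built by re-reading the plan line by line.
requirements = {
    "parses CLI flags": True,
    "writes JSON report": True,
    "handles empty input": False,   # gap found on re-read
}

gaps = [name for name, done in requirements.items() if not done]
if gaps:
    print(f"incomplete -- gaps: {gaps}")
else:
    print("all requirements verified")
```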
Agent delegation:
- RUN: Agent reports success → Check git diff → Verify changes → Report actual state
- NEVER: Trust the agent's report blindly
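Independent verification of an agent's report can be a direct check of the VCS state. This sketch assumes `git` is on PATH and uses a throwaway repository; in practice you would point it at the repository the agent claimed to modify:

```python
import subprocess
import tempfile

# Throwaway repo standing in for the one the agent claimed to modify.
repo = tempfile.mkdtemp()
subprocess.run(["git", "init", "-q", repo], check=True)

# The agent reported "success" -- does the diff agree?
status = subprocess.run(
    ["git", "-C", repo, "status", "--porcelain"],
    capture_output=True, text=True, check=True,
)
if status.stdout.strip():
    print("diff shows changes -- now verify they do what was asked")
else:
    print("empty diff: agent report was wrong; state the actual state")
```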
ALWAYS before: committing, creating a PR, claiming completion, or moving to the next task.
When this skill completes, it auto-satisfies the verification_evidence requirement. This integrates with the handle-stop.py hook — if verification_evidence is enabled, the framework will block session completion until fresh verification has been run.
No shortcuts for verification.
Run the command. Read the output. THEN claim the result.
This is non-negotiable.