Use when about to claim work is complete, fixed, or passing, before committing, creating PRs, or moving to the next task
Ensures completion claims are backed by fresh verification evidence before committing or moving to the next task.
Claiming work is complete without verification is dishonesty, not efficiency.
Core principle: Evidence before claims, always.
Violating the letter of this rule is violating the spirit of this rule.
NO COMPLETION CLAIMS WITHOUT FRESH VERIFICATION EVIDENCE
If you haven't run the verification command in this message, you cannot claim it passes.
BEFORE claiming any status or expressing satisfaction:
1. IDENTIFY: What command proves this claim?
2. RUN: Execute the FULL command (fresh, complete)
3. READ: Full output, check exit code, count failures
4. VERIFY: Does output confirm the claim?
- If NO: State actual status with evidence
- If YES: State claim WITH evidence
5. ONLY THEN: Make the claim
Skip any step = lying, not verifying
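Steps 1–5 can be sketched as a minimal gate: run the proving command fresh, capture the full output, and only then decide what may be claimed. This is a sketch, not the framework's implementation; the command shown is illustrative.

```python
import subprocess

def verify(command: list[str]) -> tuple[bool, str]:
    """Run the proving command FRESH; return (exit code == 0, full output).

    The caller must still READ the output and confirm it supports the
    claim -- a zero exit code alone is step 2, not steps 3-4.
    """
    result = subprocess.run(command, capture_output=True, text=True)
    return result.returncode == 0, result.stdout + result.stderr

# Illustrative command; substitute the command that proves YOUR claim.
passed, output = verify(["python", "-c", "assert 1 + 1 == 2"])
claim = "verified: command exited 0" if passed else "NOT verified"
```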
| Claim | Requires | Not Sufficient |
|---|---|---|
| Tests pass | Test command output: 0 failures | Previous run, "should pass" |
| Linter clean | Linter output: 0 errors | Partial check, extrapolation |
| Build succeeds | Build command: exit 0 | Linter passing, logs look good |
| Bug fixed | Test original symptom: passes | Code changed, assumed fixed |
| Regression test works | Red-green cycle verified | Test passes once |
| Agent completed | VCS diff shows changes | Agent reports "success" |
| Requirements met | Line-by-line checklist | Tests passing |
| Excuse | Reality |
|---|---|
| "Should work now" | RUN the verification |
| "I'm confident" | Confidence ≠ evidence |
| "Just this once" | No exceptions |
| "Linter passed" | Linter ≠ tests |
| "Agent said success" | Verify independently |
| "Partial check is enough" | Partial proves nothing |
| "Different words so rule doesn't apply" | Spirit over letter |
Tests:
RUN: pytest tests/ -v
SEE: 34/34 pass, 0 fail
THEN: "All 34 tests pass"
NEVER: "Should pass now" / "Looks correct"
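Counting failures from the output, rather than skimming it, can be mechanized by parsing the summary line. The `34 passed` format assumed here is pytest's default summary and may vary by version and plugins.

```python
import re

def count_results(pytest_output: str) -> tuple[int, int]:
    """Pull (passed, failed) counts from a pytest summary line such as
    '=== 34 passed in 0.5s ===' (format is an assumption)."""
    passed = re.search(r"(\d+) passed", pytest_output)
    failed = re.search(r"(\d+) failed", pytest_output)
    return (int(passed.group(1)) if passed else 0,
            int(failed.group(1)) if failed else 0)

p, f = count_results("=== 34 passed in 0.5s ===")
# Claim "All 34 tests pass" only when f == 0 and p matches the suite size.
```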
Regression tests (TDD Red-Green):
RUN: Write test → Run (pass) → Revert fix → Run (MUST FAIL) → Restore → Run (pass)
NEVER: "I've written a regression test" (without red-green verification)
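The red-green cycle amounts to a check over three runs. In this sketch, `run_test`, `apply_fix`, and `revert_fix` are hypothetical callables standing in for your real test command and VCS operations (e.g. `pytest path::test`, `git stash pop`, `git stash`).

```python
def red_green_verified(run_test, apply_fix, revert_fix) -> bool:
    """A regression test counts only if it fails without the fix
    and passes with it."""
    green = run_test()          # with fix in place: must pass
    revert_fix()
    red = not run_test()        # without fix: MUST fail
    apply_fix()
    green_again = run_test()    # restored: must pass again
    return green and red and green_again

# Toy demo: a "fix" flag and a test that actually checks it.
state = {"fixed": True}
ok = red_green_verified(
    run_test=lambda: state["fixed"],
    apply_fix=lambda: state.update(fixed=True),
    revert_fix=lambda: state.update(fixed=False),
)
# A tautological test that passes regardless of the fix fails the cycle.
tautology_ok = red_green_verified(
    run_test=lambda: True,
    apply_fix=lambda: None,
    revert_fix=lambda: None,
)
```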
Build:
RUN: python -m py_compile src/module.py
SEE: exit 0, no output
THEN: "Build passes"
NEVER: "Linter passed" (linter doesn't check all compilation)
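For a pure-Python module, "build passes" evidence can come from `py_compile` itself, which raises on syntax errors a lint configuration might skip. A sketch; real builds often involve more than byte-compilation.

```python
import py_compile
import tempfile

def compiles(path: str) -> bool:
    """True iff the file byte-compiles cleanly -- the exit-0 equivalent
    of `python -m py_compile path`."""
    try:
        py_compile.compile(path, doraise=True)
        return True
    except py_compile.PyCompileError:
        return False

# Demo on a throwaway file rather than assuming src/module.py exists.
with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
    f.write("x = 1\n")
ok = compiles(f.name)
```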
Requirements:
RUN: Re-read plan → Create checklist → Verify each → Report gaps or completion
NEVER: "Tests pass, phase complete"
Agent delegation:
RUN: Agent reports success → Check git diff → Verify changes → Report actual state
NEVER: Trust agent report blindly
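One VCS-agnostic way to verify an agent's report independently is to snapshot file hashes before delegating and diff them afterward. This is a sketch; in a git repo, `git diff` serves the same role.

```python
import hashlib
from pathlib import Path

def snapshot(root: str) -> dict[str, str]:
    """Map each file under root to its content hash."""
    return {
        str(p.relative_to(root)): hashlib.sha256(p.read_bytes()).hexdigest()
        for p in Path(root).rglob("*")
        if p.is_file()
    }

def changed_files(before: dict[str, str], after: dict[str, str]) -> set[str]:
    """Files added, removed, or modified between snapshots.
    An empty set means the agent's 'success' changed nothing."""
    return {
        k for k in before.keys() | after.keys()
        if before.get(k) != after.get(k)
    }
```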
ALWAYS before: committing, creating a PR, claiming completion, or moving to the next task.
When this skill completes, it auto-satisfies the verification_evidence requirement. This integrates with the handle-stop.py hook — if verification_evidence is enabled, the framework will block session completion until fresh verification has been run.
No shortcuts for verification.
Run the command. Read the output. THEN claim the result.
This is non-negotiable.