From rpikit
Enforces five-step verification (identify/run/read/verify/claim) of commands like npm test/build/lint before claiming code complete, fixed, or passing. Builds trust via evidence.
`npx claudepluginhub bostonaholic/rpikit --plugin rpikit`

This skill uses the workspace's default tool permissions.
Evidence before claims, always. No completion claims without fresh verification.
Claiming work is done without verification breaks trust and wastes time. This skill enforces running verification commands and confirming output before any success claim. "It should work" is not verification.
NO COMPLETION CLAIMS WITHOUT FRESH VERIFICATION EVIDENCE.
Every claim must be backed by evidence you just observed. Not evidence from earlier. Not evidence you expect. Evidence you just saw.
Before any completion claim, complete these five steps:

1. Identify: what command proves your claim?
   - Claim: "Tests pass" → Command: `npm test` (or the project's test command)
   - Claim: "Build succeeds" → Command: `npm run build` (or the project's build command)
   - Claim: "Lint is clean" → Command: `npm run lint` (or the project's lint command)
   - Claim: "Bug is fixed" → Command: the reproduction steps that previously failed
2. Execute: run the command freshly. Not from cache. Not from memory.
   - Run the command NOW.
   - Wait for it to complete.
   - Do not proceed until it has finished.
3. Read: read the COMPLETE output.
   - Exit code (0 = success)
   - All output lines, not just the last one
   - Any warnings (not just errors)
   - Summary statistics if provided
4. Verify: confirm the output supports your claim.
   - Claim: "Tests pass" → Verify: exit code 0, "X tests passed", no failures
   - Claim: "Build succeeds" → Verify: exit code 0, output files created, no errors
   - Claim: "Bug is fixed" → Verify: the previous failure no longer occurs
5. Claim: only NOW make your completion claim.
   - "Tests pass" - after seeing test output showing success
   - "Build succeeds" - after seeing the build complete without errors
   - "Implementation complete" - after all verifications pass
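The five-step loop above can be sketched as a small gate function. This is an illustrative sketch, not part of the skill itself; `["true"]` below is a stand-in command so the snippet is self-contained, where a real project would use something like `["npm", "test"]`.

```python
import subprocess

def verify_claim(command: list[str]) -> bool:
    """Return True only when freshly observed evidence supports the claim."""
    # Step 2: execute the command now and wait for it to complete.
    result = subprocess.run(command, capture_output=True, text=True)
    # Step 3: read the COMPLETE output, not just the last line.
    print(result.stdout, end="")
    print(result.stderr, end="")
    # Step 4: verify - exit code 0 is the minimum bar for a success claim.
    return result.returncode == 0

# Step 1: identify the command that proves the claim.
# ["true"] is a stand-in; a JS project would use e.g. ["npm", "test"].
if verify_claim(["true"]):
    # Step 5: only now make the completion claim.
    print("VERIFIED: command exited 0")
```

The point of returning a boolean from the run itself, rather than trusting memory of an earlier run, is that the claim and the evidence are produced in the same moment.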
**Tests**
What to run: the project's test command
What to verify:
- Exit code 0
- All tests pass (not just "some tests ran")
- No skipped tests (unless intentional)
- No test warnings

**Lint**
What to run: the project's lint command
What to verify:
- Exit code 0
- No errors
- No new warnings (compare against the baseline)

**Build**
What to run: the project's build command
What to verify:
- Exit code 0
- Output artifacts created
- No compilation errors
- No type errors

**Bug fixes**
What to run: the steps that reproduced the bug
What to verify:
- The bug no longer occurs
- Related functionality still works
- A regression test has been added and passes

**Agent claims**
What to run: independent verification of the agent's claim
What to verify:
- The agent's claimed output exists
- The output is correct (not just present)
- No silent failures are masked
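Reading the complete output matters because an exit code alone can mask skipped tests. A hypothetical sketch of scanning a test-runner summary line (the patterns assume pytest-style output such as "3 passed, 1 skipped"; adapt them for other runners):

```python
import re

def summary_supports_claim(output: str) -> bool:
    """Return True only when the summary shows passes with no failures,
    errors, or skips (skipped tests are not passing tests)."""
    if re.search(r"\b\d+ (?:failed|error)", output):
        return False
    if re.search(r"\b\d+ skipped", output):
        return False
    return bool(re.search(r"\b\d+ passed", output))

print(summary_supports_claim("5 passed in 0.12s"))    # True
print(summary_supports_claim("4 passed, 1 skipped"))  # False
```

A check like this is a complement to, not a replacement for, actually reading the output: warnings and summary statistics still need human eyes.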
| Thought | Reality |
|---|---|
| "It should pass" | Run it and see |
| "I'm confident it works" | Confidence isn't evidence |
| "I already ran it earlier" | Run it again, freshly |
| "The change was small" | Small changes can break things |
| "I'll verify later" | Verify now or don't claim |
| "The agent said it passed" | Verify agent's claims independently |
| "It worked on my machine" | Run it in the target environment |
| "I'm tired of running tests" | Fatigue doesn't excuse skipping verification |
This skill serves as the final gate before completion claims:
Plan step complete? → Run step verification → Claim step done
Phase complete? → Run phase verification → Claim phase done
Implementation complete? → Run all verifications → Claim done
Never mark a step complete without verification evidence.
Run before committing:
1. All tests pass
2. Lint is clean
3. Build succeeds
4. Type check passes (if applicable)
Run before creating PR:
1. All verifications from "Before Commits"
2. Branch is up to date with base
3. No merge conflicts
4. CI would pass (simulate locally if possible)
Run before claiming bug is fixed:
1. Reproduction steps no longer trigger bug
2. Regression test added and passes
3. Related functionality still works
4. All other tests still pass
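As a sketch, the "before commits" checklist can be automated as a gate that stops at the first failing check. The commands here are stand-ins (`true`) so the snippet is self-contained; a real JS project would substitute `npm test`, `npm run lint`, and so on.

```python
import subprocess

def run_check(name: str, cmd: list[str]) -> bool:
    """Run one verification command and report the observed evidence."""
    result = subprocess.run(cmd, capture_output=True, text=True)
    ok = result.returncode == 0
    status = "pass" if ok else f"FAIL (exit {result.returncode})"
    print(f"{name}: {status}")
    return ok

def gate(checks: dict[str, list[str]]) -> bool:
    """Every check must pass; stop at the first failure so 'ready to
    commit' is never claimed on partial evidence."""
    return all(run_check(name, cmd) for name, cmd in checks.items())

# "true" is a stand-in command; a real JS project would use e.g.
# {"tests": ["npm", "test"], "lint": ["npm", "run", "lint"],
#  "build": ["npm", "run", "build"]}
if gate({"tests": ["true"], "lint": ["true"]}):
    print("commit allowed")
else:
    print("commit blocked")
```

Short-circuiting at the first failure keeps the evidence honest: once one check fails, later "pass" results cannot be used to soften the claim.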
**JavaScript/TypeScript**
- Tests: `npm test`
- Lint: `npm run lint`
- Build: `npm run build`
- Types: `npm run typecheck` (or `tsc --noEmit`)

**Python**
- Tests: `pytest`
- Lint: `ruff check .` or `flake8`
- Types: `mypy .`
- Format: `ruff format --check .` or `black --check .`

**Ruby**
- Tests: `bundle exec rspec`
- Lint: `bundle exec rubocop`

**Go**
- Tests: `go test ./...`
- Lint: `golangci-lint run`
- Build: `go build ./...`

**Rust**
- Tests: `cargo test`
- Lint: `cargo clippy`
- Build: `cargo build`
- Format: `cargo fmt --check`
- Wrong: Run only the test file you changed. Right: Run the full test suite to catch regressions.
- Wrong: Trust previous run results. Right: Run fresh each time before claiming.
- Wrong: "I know this works, no need to verify." Right: Verify anyway; confidence isn't evidence.
- Wrong: The agent said tests pass, so they pass. Right: Run the tests yourself to verify.
- Wrong: Skip verification because you're almost done. Right: The final verification is the most important one.