From claudekit
Enforces a five-step verification process (identify, execute, read, verify, claim) for commands like tests, lint, and build before claiming work is complete, fixed, or passing.
```bash
npx claudepluginhub duthaho/claudekit --plugin claudekit
```

This skill uses the workspace's default tool permissions.
Determine the command that proves your assertion:
| Claim | Verification command |
|---|---|
| "Tests pass" | `npm test` |
| "Build succeeds" | `npm run build` |
| "Linting passes" | `npm run lint` |
Run the command fully and freshly:
```bash
# Don't rely on cached results
# Don't assume previous run is still valid
npm test
```
Read the complete output and exit codes:
```bash
# Check output carefully
# Don't skim - read every line
# Note exit code (0 = success)
```
Confirm the output matches your claim:
- Claim: "All tests pass"
- Output shows: "42 passing, 0 failing"
- Verification: ✓ Claim is accurate
Only now make the claim, with evidence:
✓ All tests pass (42 passing, verified at 2024-01-15 14:30)
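The five steps can be collapsed into a small shell gate. This is a sketch, not part of claudekit: the `verify` helper name is made up, and `true`/`false` stand in for real commands such as `npm test`.

```shell
# Hypothetical helper: run the verification command fresh, read its
# exit code, and only print the claim (with a timestamp) on success.
verify() {
  claim=$1
  shift
  if "$@"; then
    echo "VERIFIED: $claim (exit 0 at $(date -u +%Y-%m-%dT%H:%MZ))"
  else
    status=$?
    echo "NOT VERIFIED: $claim (exit $status)" >&2
    return "$status"
  fi
}

# Stand-ins for real commands like `npm test` / `npm run build`:
verify "tests pass" true
verify "build succeeds" false || echo "claim withheld"
```

Because the claim is printed only inside the success branch, it is impossible to emit "VERIFIED" without a fresh, passing run.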
```bash
# Run test command
npm test
# Verify in output:
# - Zero failures
# - Expected test count
# - No skipped tests (unless intentional)
```
Not valid: "Tests should pass" without running them
```bash
# Run linter completely
npm run lint
# Verify in output:
# - Zero errors
# - Zero warnings (or acceptable known warnings)
```
Not valid: Using lint as proxy for build success
```bash
# Run build command
npm run build
# Verify:
# - Exit code 0
# - Build artifacts created
# - No errors in output
```
Not valid: Assuming lint passing means build passes
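Checking both signals can be sketched as below; `fake_build` is a stand-in for `npm run build`, and `dist/main.js` is an assumed artifact path, not a project convention.

```shell
# Stand-in build that creates an artifact; a real project would run
# `npm run build` and check its actual output path.
workdir=$(mktemp -d)
cd "$workdir" || exit 1
fake_build() { mkdir -p dist && printf 'bundle' > dist/main.js; }
# Verify exit code 0 AND a non-empty artifact:
if fake_build && [ -s dist/main.js ]; then
  echo "build verified: exit 0 and artifact present"
else
  echo "build NOT verified" >&2
fi
```

The artifact check catches builds that exit 0 but silently emit nothing.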
```bash
# Step 1: Reproduce original bug
npm test -- --grep "failing test"
# Should fail

# Step 2: Apply fix

# Step 3: Verify fix works
npm test -- --grep "failing test"
# Should pass
```
Not valid: Claiming fix works without reproducing original failure
Complete red-green cycle required:
```bash
# 1. Write the test, run it
npm test       # Should PASS (fix is in place)
# 2. Revert the fix
git stash
# 3. Run the test again
npm test       # Should FAIL (proves the test catches the bug)
# 4. Restore the fix
git stash pop
# 5. Run the test again
npm test       # Should PASS
```
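The stash cycle above can be automated. This is a sketch in a throwaway repo: `run_test` (a grep for a marker) stands in for the real test suite, and `app.txt` is a made-up file.

```shell
# Automate the red-green stash cycle in a temporary git repo.
repo=$(mktemp -d)
cd "$repo" || exit 1
git init -q
git config user.email ci@example.com
git config user.name ci
echo "buggy" > app.txt
git add app.txt && git commit -qm "baseline with bug"
echo "fixed" > app.txt                   # the uncommitted fix
run_test() { grep -q "fixed" app.txt; }  # stand-in for npm test
run_test && echo "green: test passes with fix"
git stash -q                             # revert the fix
run_test || echo "red: test fails without fix (it catches the bug)"
git stash pop -q                         # restore the fix
run_test && echo "green again: fix restored"
```

If the "red" line never prints, the test does not actually detect the bug and proves nothing.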
## Original Requirements
1. User can login with email
2. User can reset password
3. Session expires after 24 hours
## Verification Checklist
- [x] Requirement 1: Tested login flow manually + unit tests
- [x] Requirement 2: Tested reset flow manually + integration test
- [x] Requirement 3: Verified SESSION_TIMEOUT=86400 in config + test
Don't trust agent reports blindly:
```bash
# Agent claims: "Fixed the bug in user.ts"
# Verify independently:
git diff src/user.ts   # Check actual changes
npm test               # Verify tests pass
```
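A minimal sketch of that independent check, in a throwaway repo (the file `user.txt` stands in for `src/user.ts`):

```shell
# Confirm a claimed fix by checking git actually sees a change.
repo=$(mktemp -d)
cd "$repo" || exit 1
git init -q
git config user.email ci@example.com
git config user.name ci
echo "old" > user.txt
git add user.txt && git commit -qm base
echo "new" > user.txt                  # the agent's claimed fix
if git diff --quiet -- user.txt; then
  echo "user.txt unchanged: claim NOT verified"
else
  echo "user.txt changed: diff confirmed, now run the tests"
fi
```

`git diff --quiet` exits 0 when there are no changes, so an "I fixed it" claim with a clean diff fails immediately.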
Never use these phrases without verification:
| Forbidden | Why |
|---|---|
| "should work" | Implies uncertainty |
| "probably fixed" | Not verified |
| "seems to pass" | Didn't read output |
| "I think it's done" | Guessing |
| "Great!" (before checking) | Premature celebration |
| "Done!" (before verification) | Unverified claim |

| Instead Say | After |
|---|---|
| "Tests pass" | Running tests, seeing 0 failures |
| "Build succeeds" | Running build, exit code 0 |
| "Bug is fixed" | Reproducing bug, verifying fix |
- BAD: "I ran one test and it passed"
  GOOD: "Full test suite passes (42/42)"
- BAD: "Tests passed earlier"
  GOOD: "Tests pass now (just ran)"
- BAD: "This is a small change, no need to verify"
  GOOD: "Small change, but verified: tests pass, lint clean"
- BAD: Agent said it's fixed, so it's fixed
  GOOD: Agent said it's fixed, I verified by running tests
Use before claiming completion:
```markdown
## Task: [Task Name]

### Verification Steps
- [ ] Tests run: `npm test`
  - Result: [X passing, Y failing]
- [ ] Lint passes: `npm run lint`
  - Result: [No errors]
- [ ] Build succeeds: `npm run build`
  - Result: [Exit code 0]
- [ ] Requirements met:
  - [ ] Requirement 1: [How verified]
  - [ ] Requirement 2: [How verified]

### Evidence
[Paste relevant output or screenshots]

### Conclusion
✓ Task complete, all verifications passed
```
```bash
# Python
pytest -v --cov=src --cov-report=term-missing  # Tests + coverage
ruff check .                                   # Linting
mypy src/ --strict                             # Type checking

# All-in-one
pytest -v && ruff check . && mypy src/
```
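Why the all-in-one chain works as a gate: `&&` short-circuits, so the first failing step stops the chain and later checks never run. A sketch with `true`/`false` standing in for the real commands:

```shell
# Demonstrate && short-circuiting: lint "fails", so build never runs.
step() { echo "running $1"; "$2"; }
if step tests true && step lint false && step build true; then
  echo "all checks passed"
else
  echo "chain stopped: a check failed, later steps never ran"
fi
```

This means a passing chain is evidence for every step, but a failing chain only tells you about the first failure.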
```bash
# Node.js / TypeScript
npm test           # Tests
npm run lint       # Linting
npx tsc --noEmit   # Type checking
npm run build      # Build (catches import issues)

# All-in-one
npm test && npm run lint && npm run build
```
```bash
# Next.js
npm test           # Tests
next lint          # Linting
npx tsc --noEmit   # Type checking
next build         # Build (catches SSR + RSC issues)

# All-in-one
npm test && next lint && next build
```
```bash
# Vitest
npx vitest run     # Tests
npm run lint       # Linting
npx tsc --noEmit   # Type checking
npm run build      # Build

# All-in-one
npx vitest run && npm run lint && npm run build
```
When claiming completion, paste evidence in this format:
```markdown
**Verification Evidence:**
- Tests: `pytest -v` → 47 passed, 0 failed
- Lint: `ruff check .` → no issues
- Types: `mypy src/` → Success: no issues found
```

```markdown
**Verification Evidence:**
- Tests: `npm test` → 23 passed, 0 failed
- Lint: `npm run lint` → no warnings
- Build: `npm run build` → compiled successfully
```
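The evidence block can be generated rather than hand-written. A sketch with a made-up `emit_evidence` helper; `true` stands in for the real check commands:

```shell
# Hypothetical helper: run each "Label=command" check and emit an
# evidence block in the format above.
emit_evidence() {
  echo "**Verification Evidence:**"
  for spec in "$@"; do
    label=${spec%%=*}
    cmd=${spec#*=}
    if $cmd >/dev/null 2>&1; then
      echo "- $label: \`$cmd\` -> passed (exit 0)"
    else
      echo "- $label: \`$cmd\` -> FAILED"
    fi
  done
}

emit_evidence "Tests=true" "Lint=true"
```

Generating the block from live runs prevents pasting stale evidence from an earlier session.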
- test-driven-development -- TDD naturally produces verifiable work; verification confirms the TDD cycle was followed correctly
- systematic-debugging -- After debugging, verification ensures the fix actually resolves the issue
- requesting-code-review -- Verification should happen before requesting review to avoid wasting reviewer time on broken code