Use before claiming work complete, fixed, or passing - requires running verification commands and confirming output; evidence before assertions always
Install: /plugin marketplace add withzombies/hyperpowers, then /plugin install withzombies-hyper@withzombies-hyper
This skill inherits all available tools. When active, it can use any tool Claude has access to.
<skill_overview> Claiming work is complete without verification is dishonesty, not efficiency. Evidence before claims, always. </skill_overview>
<rigidity_level> LOW FREEDOM - NO exceptions. Run verification command, read output, THEN make claim.
No shortcuts. No "should work". No partial verification. Run it, prove it. </rigidity_level>
<quick_reference>
| Claim | Verification Required | Not Sufficient |
|---|---|---|
| Tests pass | Run full test command, see 0 failures | Previous run, "should pass" |
| Build succeeds | Run build, see exit 0 | Linter passing |
| Bug fixed | Test original symptom, passes | Code changed |
| Task complete | Check all success criteria, run verifications | "Implemented bd-3" |
| Epic complete | bd list --status open --parent bd-1 shows 0 | "All tasks done" |
Iron Law: NO COMPLETION CLAIMS WITHOUT FRESH VERIFICATION EVIDENCE
Use test-runner agent for: Tests, pre-commit hooks, commits (keeps verbose output out of context)
</quick_reference>
<when_to_use> ALWAYS before:
- Claiming tests pass, a build succeeds, or a bug is fixed
- Closing a bd task or epic
- Committing or reporting work as complete
Red flags you need this:
- "Should work" / "should pass now" without a fresh run
- Relying on a previous run, a linter pass, or any partial verification
</when_to_use>
<the_process>
Before claiming ANY status:
1. What command proves this claim?
2. Execute the full command (fresh, complete).
   - For tests/hooks/commits: use the hyperpowers:test-runner agent
   - For other commands: run directly and capture output
3. Read the full output; check the exit code; count failures.
4. Does the output confirm the claim?
5. Make the claim.
Skip any step = lying, not verifying
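A minimal shell sketch of this loop, using cargo test as the proving command (the log path is illustrative):
set -o pipefail
cargo test 2>&1 | tee /tmp/verify.log   # fresh, complete run, output captured
echo $?                                  # exit code of cargo test; must print 0
# Only after seeing exit 0 and "0 failed" in the output: make the claim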
</the_process>
<examples> <example> <scenario>Developer claims tests pass without running them</scenario> <code> Developer modifies authentication logic.
Developer thinks: "This fix is straightforward, tests should pass now"
Developer writes: "Great! All tests passing. The bug is fixed."
[No test command run, no output shown] </code>
<why_it_fails> No evidence:
- No test command was run; no output or exit code shown
- "Should pass" is a prediction, not a result
Why dangerous:
- A failing test ships as "fixed"; the regression surfaces later, downstream
Result: False completion claim. </why_it_fails>
<correction> **Use test-runner agent:**
# Dispatch test-runner agent
"Run: cargo test"
Agent returns:
Summary: 33 passed, 1 failed
Failures:
- test_login_with_expired_token: assertion failed
Expected: Err(AuthError::TokenExpired)
Got: Ok(User { id: 123 })
State actual status:
Tests: 33 passed, 1 failed.
Failure: test_login_with_expired_token still fails.
The fix didn't handle expired tokens.
Investigating...
Result: Accurate status, bug caught before committing. </correction> </example>
<example> <scenario>Developer closes bd task without verifying success criteria</scenario> <code> bd-3 success criteria:
- [ ] All functions fully implemented (no stubs, no TODOs)
- [ ] Tests written and passing: test_generate_token, test_verify_token
- [ ] Pre-commit hooks pass
Developer implements token functions.
Developer thinks: "I implemented everything, task complete"
Runs: bd close bd-3
[No verification commands run] </code>
<why_it_fails> Success criteria not verified:
- No check for TODOs or stubs was run
- test_generate_token and test_verify_token were never executed
- Pre-commit hooks were never run
Result: Incomplete task closed, issues discovered later. </why_it_fails>
<correction> **Verify each criterion:**
# 1. Check for TODOs
rg "TODO|FIXME|todo!|unimplemented!" src/
# Output: [nothing found] ✅
# 2. Run specific tests
cargo test test_generate_token test_verify_token
# Output: test result: ok. 2 passed ✅
# 3. Run pre-commit hooks (via test-runner agent)
"Run: cargo fmt --check && cargo clippy && cargo test"
Agent returns:
All checks passed:
- cargo fmt: no formatting issues
- cargo clippy: no warnings
- cargo test: 34/34 passed
THEN close task:
bd close bd-3
Result: Task actually complete, all criteria met with evidence. </correction> </example>
<example> <scenario>Developer claims build works based on linter passing</scenario> <code> Developer fixes TypeScript errors flagged by linter.
Runs: npm run lint
Output: ✅ No issues found
Developer writes: "Build passes! Ready to commit."
[No build command run] </code>
<why_it_fails> Linter ≠ compiler:
- Lint rules check style and local patterns; they don't run the TypeScript compiler
- A clean lint run says nothing about whether npm run build exits 0
Why it happens: Partial verification, extrapolation. </why_it_fails>
<correction> **Run actual build:**
npm run build
Output:
ERROR in src/auth/service.ts:23:15
TS2304: Cannot find name 'validateToken'
Did you mean 'verifyToken'?
State actual status:
Linter passes, but build fails.
Error: validateToken not defined (should be verifyToken).
Fixing...
After fix, run build again:
npm run build
# Output: ✓ built in 1.2s
Now can claim:
Build passes. [Evidence: npm run build exit 0]
Ready to commit.
Result: Actual build status verified, error caught. </correction> </example>
</examples>
<critical_rules>
No claims without fresh verification → Run command, see output, THEN claim
Use test-runner agent for verbose commands → Tests, hooks, commits
Never run git commit or cargo test directly when output is verbose → dispatch the agent instead
Verify ALL success criteria → Not just "tests pass"
Evidence in every claim → Show the output
All of these mean: Stop, run verification:
- "Should work" / "should pass now"
- "This fix is straightforward"
- "Linter passed, so the build is fine"
- Any status claim that cites a previous run
If your project uses pre-commit hooks enforcing tests, you can spot-check any past commit's claim:
git checkout <sha> && pytest
</critical_rules>
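A sketch of that spot-check end to end (assuming pytest is the enforced test command; <sha> stays whatever commit you are checking):
git checkout <sha>   # the commit whose "tests pass" claim you're checking
pytest               # exit code 0 = claim verified at that commit
git checkout -       # return to your previous branch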
<verification_checklist>
Before claiming tests pass:
- Fresh, full test run completed; 0 failures; exit 0
Before claiming build succeeds:
- Fresh build run completed; exit 0 (linter passing is not sufficient)
Before closing bd task:
- Every success criterion verified with its own command and output
Before closing bd epic:
- bd list --status open --parent bd-1 shows 0 open tasks
- bd dep tree bd-1 confirms all children closed
</verification_checklist>
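A sketch of the epic check in order (bd commands as used above; closing the epic itself via bd close is an assumption):
bd list --status open --parent bd-1
# Output must show no open tasks
bd dep tree bd-1
# Every child closed, THEN:
bd close bd-1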
<integration>This skill calls:
This skill is called by:
Agents used:
- hyperpowers:test-runner (tests, pre-commit hooks, commits; keeps verbose output out of context)
When stuck:
Verification patterns:
</integration>
This skill should be used when the user asks to "create an agent", "add an agent", "write a subagent", "agent frontmatter", "when to use description", "agent examples", "agent tools", "agent colors", "autonomous agent", or needs guidance on agent structure, system prompts, triggering conditions, or agent development best practices for Claude Code plugins.
This skill should be used when the user asks to "create a slash command", "add a command", "write a custom command", "define command arguments", "use command frontmatter", "organize commands", "create command with file references", "interactive command", "use AskUserQuestion in command", or needs guidance on slash command structure, YAML frontmatter fields, dynamic arguments, bash execution in commands, user interaction patterns, or command development best practices for Claude Code.
This skill should be used when the user asks to "create a hook", "add a PreToolUse/PostToolUse/Stop hook", "validate tool use", "implement prompt-based hooks", "use ${CLAUDE_PLUGIN_ROOT}", "set up event-driven automation", "block dangerous commands", or mentions hook events (PreToolUse, PostToolUse, Stop, SubagentStop, SessionStart, SessionEnd, UserPromptSubmit, PreCompact, Notification). Provides comprehensive guidance for creating and implementing Claude Code plugin hooks with focus on advanced prompt-based hooks API.