Use automatically during development workflows when making claims about tests, builds, verification, or code quality that require concrete evidence, ensuring trust through transparency.
Automatically runs verification commands (tests, builds, linting) and shows their complete output whenever claims are made about code quality, builds, or test results. Triggers when you need to prove that assertions about verification, builds, or agent work are actually true.
/plugin marketplace add TheBushidoCollective/han
/plugin install jutsu-scratch@han
Show, don't tell. Never make claims about code verification without providing concrete evidence.
Trust through transparency. Every assertion about code quality, test results, builds, or verification must be backed by actual command output, not summaries or assumptions.
When implementing features:

1. Use Write/Edit tools to make changes
2. Show the tool results
3. Verify the changes exist:

   ls -la /path/to/file.ex

Remember: If you didn't use Write/Edit tools, it didn't happen.
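A minimal sketch of that "verify the changes exist" step, assuming a hypothetical Elixir file path; substitute whatever you actually edited:

```bash
# Hypothetical path for illustration - replace with the file you edited
FILE=apps/api/lib/users/worker.ex

# Confirm the file exists on disk
ls -la "$FILE"

# Confirm the edit is actually present in the working tree
git diff --stat -- "$FILE"
```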
NEVER EVER trust agent completion reports without verification.
This is a zero-tolerance rule. Agent reports are NOT proof - they are claims requiring verification.
When you delegate work to a subagent:
After ANY agent completes, you MUST verify work was actually done:
# 1. Verify files were actually modified
git status --short
# 2. Verify actual changes exist
git diff --name-only
# 3. Verify specific file exists (if agent claimed to create it)
ls -la /path/to/file
# 4. Verify file content (spot check)
cat /path/to/file | head -20
If git status shows a clean working tree → NOTHING was done, regardless of the agent's report.
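As a sketch, that clean-tree check can be scripted with git's porcelain output (the exit-on-failure behaviour here is an assumption about your workflow, not part of the skill itself):

```bash
# Abort if the agent's "completed" work left no trace in the working tree
if [ -z "$(git status --porcelain)" ]; then
  echo "Working tree is clean: nothing was changed. Do NOT trust the agent report."
  exit 1
fi
```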
| Agent Claim | Required Verification |
|---|---|
| "Successfully created X" | ls -la /path/to/X - prove file exists |
| "Modified files A, B, C" | git status - prove files show as modified |
| "Changes made to Y" | git diff Y - prove actual changes exist |
| "Implementation complete" | git diff --stat - prove work was done |
| "Added tests to Z" | cat Z - prove tests actually exist |
| "Updated configuration" | git diff config/ - prove config changed |
1. Delegate to agent
2. Agent reports completion
3. ⚠️ STOP - DO NOT TRUST REPORT ⚠️
4. Run verification commands (git status, ls, cat, etc.)
5. If verification fails → Agent did NOT complete work
6. If verification passes → THEN report to user WITH PROOF
❌ NEVER relay an agent's completion claim to the user unverified.
✅ ALWAYS report with proof:
# After agent completes, verify:
$ git status --short
M apps/api/lib/users/worker.ex
A apps/api/test/users/worker_test.exs
# Prove files exist:
$ ls -la apps/api/test/users/worker_test.exs
-rw-r--r-- 1 user staff 2847 Nov 7 14:32 apps/api/test/users/worker_test.exs
# Spot check content:
$ head -10 apps/api/test/users/worker_test.exs
defmodule YourApp.Users.WorkerTest do
use YourApp.DataCase
...
Then report: "Agent completed. Verification confirms 2 files changed (evidence above)."
The user must be able to trust your reports. Agent reports without verification are worthless.
This is non-negotiable. Failure to verify agent work is unacceptable.
Apply this skill whenever claiming that tests pass, a build succeeds, linting is clean, types check, or CI is green.
❌ Never make those claims without proof.
✅ Always provide the actual output:
# Tests
$ mix test
Finished in 42.3 seconds
1,247 tests, 0 failures
# Linting
$ MIX_ENV=test mix lint
Running Credo... ✓ No issues found.
# Types
$ yarn ts:check
✓ 456 files checked, 0 errors
# CI Pipeline
$ glab ci status
Pipeline #12345: passed ✓
URL: https://gitlab.com/.../pipelines/12345
❌ WRONG: claiming "tests should pass" without running anything.
✅ RIGHT: "Let me run mix test to verify...", then show the complete output.

❌ NEVER claim success from partial runs:
# Only ran 50 tests of 1,247
Running ExUnit tests...
50 tests, 0 failures
# STOPPED HERE - did not complete
"Tests pass" ❌ FALSE - only partial run
✅ ALWAYS run to completion:
# Full suite completed
Finished in 42.3 seconds
1,247 tests, 0 failures # ALL tests ran
"Full test suite passes: 1,247 tests, 0 failures" ✅ TRUE
$ MIX_ENV=test mix lint
✓ No issues found.
$ mix test
1,247 tests, 0 failures
$ yarn graphql:compat
✓ No breaking changes
$ glab ci status
Pipeline #12345: passed ✓
Then claim: "All verification passed (evidence above)."
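A sketch of running that whole battery in one pass, stopping at the first failure so a partial result can never be reported as success (the command names follow this document's examples and may differ in your project):

```bash
#!/usr/bin/env bash
# No claim unless every check below passes
set -euo pipefail

MIX_ENV=test mix lint     # linting
mix test                  # full test suite
yarn graphql:compat       # schema compatibility
glab ci status            # CI pipeline state

echo "All verification passed (full output above)."
```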
Stop and provide proof if you catch yourself making unverified claims like "tests should pass" or "this probably works".
Implementation: Run verification → Show complete output → Report with evidence → Wait for approval
Code review: Reference line numbers (file.ts:123), quote issues, show analyzer output
Debugging: Show full errors, stack traces, reproduction steps with output
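For the debugging case, a small sketch of capturing the complete failure output instead of paraphrasing it (assumes a Mix project; the test path and log filename are hypothetical):

```bash
# Re-run the failing test and keep the full output, stack trace included
mix test path/to/failing_test.exs 2>&1 | tee debug_failure.log

# Quote the error lines verbatim when reporting, rather than summarizing them
grep -n -A 20 -F "** (" debug_failure.log || true
```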
This skill should be used when the user asks to "create an agent", "add an agent", "write a subagent", "agent frontmatter", "when to use description", "agent examples", "agent tools", "agent colors", "autonomous agent", or needs guidance on agent structure, system prompts, triggering conditions, or agent development best practices for Claude Code plugins.
This skill should be used when the user asks to "create a slash command", "add a command", "write a custom command", "define command arguments", "use command frontmatter", "organize commands", "create command with file references", "interactive command", "use AskUserQuestion in command", or needs guidance on slash command structure, YAML frontmatter fields, dynamic arguments, bash execution in commands, user interaction patterns, or command development best practices for Claude Code.
This skill should be used when the user asks to "create a hook", "add a PreToolUse/PostToolUse/Stop hook", "validate tool use", "implement prompt-based hooks", "use ${CLAUDE_PLUGIN_ROOT}", "set up event-driven automation", "block dangerous commands", or mentions hook events (PreToolUse, PostToolUse, Stop, SubagentStop, SessionStart, SessionEnd, UserPromptSubmit, PreCompact, Notification). Provides comprehensive guidance for creating and implementing Claude Code plugin hooks with focus on advanced prompt-based hooks API.