npx claudepluginhub rune-kit/rune --plugin @rune/analytics

This skill uses the workspace's default tool permissions.
Runs all automated checks to verify code health. Stateless — runs checks and reports results.
Enforces five-step verification (identify/run/read/verify/claim) of commands like npm test/build/lint before claiming code complete, fixed, or passing. Builds trust via evidence.
Enforces running verification commands and providing evidence before claiming coding task completion. Blocks unverified success reports.
Verifies code after changes with typecheck, lint, tests, and build for Node/TS, Python, Go, and Rust projects. Auto-fixes errors; detects secrets, circular dependencies, dead code, and AI slop.
Use Glob to find project config files:
- package.json → Node.js/TypeScript project
- pyproject.toml or setup.py → Python project
- Cargo.toml → Rust project
- go.mod → Go project
- pom.xml or build.gradle → Java project

Use Read on the detected config file to find scripts or tool config (e.g., the package.json scripts block for custom lint/test commands).
TodoWrite: [
{ content: "Detect project type", status: "in_progress" },
{ content: "Run lint check", status: "pending" },
{ content: "Run type check", status: "pending" },
{ content: "Run test suite", status: "pending" },
{ content: "Run build", status: "pending" },
{ content: "Generate verification report", status: "pending" }
]
Use Bash to run the appropriate linter. If package.json has a lint script, prefer that:
- Node: npm run lint or npx eslint . --max-warnings 0
- Python: ruff check . (fallback: flake8 .)
- Rust: cargo clippy -- -D warnings
- Go: golangci-lint run (fallback: go vet ./...)

If lint fails: record the failure output, mark lint as FAIL, and continue to the next step. Do NOT stop.
Verification gate: the command exits without crashing (reported lint errors count as FAIL results, not crashes).
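The fail-and-continue behavior can be sketched like this (a minimal sketch; `run_check` and the check matrix are illustrative, not part of the skill):

```python
import subprocess

def run_check(name: str, cmd: list[str]) -> dict:
    """Run one check; a non-zero exit is a FAIL result, not a reason to stop."""
    try:
        proc = subprocess.run(cmd, capture_output=True, text=True, timeout=600)
    except FileNotFoundError:
        # Tool missing: SKIP, never PASS
        return {"name": name, "status": "SKIP", "detail": "tool not installed"}
    status = "PASS" if proc.returncode == 0 else "FAIL"
    return {"name": name, "status": status, "detail": proc.stdout[-2000:]}

# Example check matrix for a Node/TS project; every entry runs even if an
# earlier one fails, so all failures can be aggregated at the end:
CHECKS = [
    ("lint", ["npx", "eslint", ".", "--max-warnings", "0"]),
    ("types", ["npx", "tsc", "--noEmit"]),
]
# results = [run_check(name, cmd) for name, cmd in CHECKS]
```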
Use Bash:
- Node/TS: npx tsc --noEmit
- Python: mypy . (fallback: pyright .)
- Rust: cargo check
- Go: go vet ./...

If the type check fails: record the error count and the first 10 error lines, mark as FAIL, continue.
Use Bash to run the test suite. Prefer the project script if available:
- Node: npm test, npx vitest run, or npx jest --passWithNoTests
- Python: pytest -v (fallback: python -m unittest discover)
- Rust: cargo test
- Go: go test ./...

Record: total tests, passed count, failed count, and coverage percentage if the output includes it.
If tests fail: record which tests failed (first 20), mark as FAIL, continue to build.
Use Bash:
- Node: package.json has a build script → npm run build (fallback: npx tsc)
- Python: pyproject.toml has a [build-system] section:
  - python -m build --no-isolation 2>&1 | head -20 to verify packaging
  - setup.py exists (legacy): python setup.py check --strict
  - pip install -e . --dry-run to catch broken entry points, missing __init__.py, or import path issues
  - pyproject.toml and no setup.py (scripts-only project): SKIP
- Rust: cargo build
- Go: go build ./...

If the build fails: record the first 20 lines of build output, mark as FAIL.
Compile all results into the structured report. Update all TodoWrite items to completed.
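The aggregation step can be sketched as follows (the result-dict shape {name, status, detail} is an illustrative assumption, not part of the skill spec):

```python
def overall_verdict(results: list[dict]) -> str:
    """FAIL if any check failed; SKIPs do not fail the run but are reported."""
    return "FAIL" if any(r["status"] == "FAIL" for r in results) else "PASS"

def render_report(results: list[dict]) -> str:
    """Render the structured report the skill emits at the end of a run."""
    lines = ["VERIFICATION REPORT", "==================="]
    for r in results:
        lines.append(f'{r["name"].capitalize()}: {r["status"]} ({r.get("detail", "")})')
    lines.append(f"Overall: {overall_verdict(results)}")
    return "\n".join(lines)
```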
Every file created or modified during implementation must pass ALL 3 levels:
Level 1 — EXISTS: File is on disk, non-empty.
Glob("path/to/expected/file") → found
Level 2 — SUBSTANTIVE: Contains real logic, NOT a stub. Scan for these stub patterns:
| Pattern | Language | Meaning |
|---|---|---|
| Component returns only `<div>Placeholder</div>` or `<div>TODO</div>` | React/Vue | Stub component |
| Route returns `{ message: "Not implemented" }` or `res.status(501)` | API | Stub endpoint |
| Function body is only `return null` / `return {}` / `return []` / `pass` | Any | Stub function |
| Class with all methods throwing `NotImplementedError` | Python/Java | Stub class |
| `useEffect` with empty body / async function with no `await` | React/JS | Hollow implementation |
| File has only type/interface exports but no implementation | TypeScript | Stub types-only file |
| `// TODO` or `# TODO` as the only content in a function | Any | Placeholder |
If ANY stub pattern detected → mark file as STUB, Level 2 FAIL.
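The stub scan can be sketched with a few regexes. These cover only some of the table's patterns, and the names are illustrative:

```python
import re

# Regexes for a subset of the stub patterns in the table above
STUB_PATTERNS = [
    re.compile(r"<div>\s*(Placeholder|TODO)\s*</div>"),       # stub component
    re.compile(r'\{\s*message:\s*"Not implemented"\s*\}'),    # stub endpoint
    re.compile(r"res\.status\(\s*501\s*\)"),                  # stub API route
    re.compile(r"raise\s+NotImplementedError"),               # stub class/method
]

def looks_like_stub(source: str) -> bool:
    """Level 2 check: True if any stub pattern appears in the file source."""
    return any(p.search(source) for p in STUB_PATTERNS)
```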
Level 3 — WIRED: Actually imported/called/used by the rest of the system.
| File Type | Wiring Check |
|---|---|
| Component | Grep("<ComponentName") in parent files → ≥1 consumer |
| API route | Grep for fetch/request calls hitting the route path → ≥1 caller |
| Hook | Grep("useHookName(") → ≥1 consumer |
| Utility function | Grep("import.*from.*this-file") → ≥1 importer |
| DB model/schema | Grep for ModelName usages in queries/services → ≥1 consumer |
| CSS/style module | Grep("import.*from.*this-style") → ≥1 importer |
If file has 0 consumers → mark as UNWIRED, Level 3 FAIL.
Exception: Entry-point files (main.ts, index.ts, App.tsx, routes config) are exempt from Level 3 — they ARE the top-level consumers.
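A minimal Level 3 sketch, using a plain substring scan in place of the Grep tool (`count_consumers` is a hypothetical helper; a real wiring check would match import syntax, not bare substrings):

```python
from pathlib import Path

def count_consumers(name: str, project_root: str, exclude: str) -> int:
    """Count source files (other than the file itself) that mention the
    exported name. 0 consumers means the file is UNWIRED (Level 3 FAIL)."""
    hits = 0
    for path in Path(project_root).rglob("*.ts*"):  # .ts and .tsx
        if path.name == exclude:
            continue
        if name in path.read_text(errors="ignore"):
            hits += 1
    return hits
```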
ALL new files must pass Level 1 + Level 2 + Level 3. EXISTS but STUB = "Existence Theater" — the agent created files but didn't implement them. EXISTS and SUBSTANTIVE but UNWIRED = dead code — created but never connected. Report which level failed for each file in the Verification Report.

Inspired by CLI-Anything (HKUDS/CLI-Anything, 14.5k★): "Never trust exit 0." Many tools exit 0 even when they fail silently. Always verify the ACTUAL output.
After each phase command, verify that the expected artifact or indicator is present:
- Test output — scan stdout for the pass/fail summary line:
  - X passed, X failed — if neither appears, the output is incomplete
  - exit 0 with no X passed / X failed summary = the runner crashed silently
- Build output — after npm run build / cargo build / go build:
  - Glob("dist/**/*.js") or equivalent to confirm artifacts exist
- Lint output — parse stdout for counts, not just the exit code:
  - X problems (Y errors, Z warnings) — 0 problems = PASS
  - an error keyword with no count → log as suspicious, mark WARN
- Generated files — check magic bytes for binary outputs:
  - %PDF — use Bash("head -c 4 file.pdf")
  - PK (ZIP magic) — use Bash("head -c 2 file.zip")
- Type check — do not trust the exit code alone:
  - tsc --noEmit: look for Found X errors or the absence of error lines
  - Found 0 errors = PASS; any other count = FAIL
  - silent tsc output with no errors emitted = PASS — note this explicitly
- Missing tool (e.g., ruff not installed): note "tool not installed", mark the check as SKIP

Dependencies: None — a pure runner using Bash for all checks. Does not invoke other skills.
- cook (L1): Phase 6 VERIFY — final check before commit
- fix (L2): validate fix doesn't break existing functionality
- test (L2): validate test coverage meets threshold
- deploy (L2): post-deploy health checks
- sentinel (L2): run security audit tools (npm audit, etc.)
- safeguard (L2): verify safety net is solid before refactoring
- db (L2): run migration in test environment
- perf (L2): run benchmark scripts if configured
- skill-forge (L2): verify newly created skill passes lint/type/build checks
- team (L1): verify each parallel workstream before merge
- scaffold (L1): verify scaffolded project builds and passes initial tests
- launch (L1): pre-deploy verification gate
- mcp-builder (L2): verify generated MCP server compiles and starts
- preflight (L2): run verification as part of pre-commit quality gate
- logic-guardian (L2): verify logic invariants hold after changes
- dependency-doctor (L3): verify builds pass after dependency updates
- sast (L3): run verification alongside static analysis

VERIFICATION REPORT
===================
Lint: [PASS/FAIL/SKIP] ([details])
Types: [PASS/FAIL/SKIP] ([X errors])
Tests: [PASS/FAIL/SKIP] ([passed]/[total], [coverage]%)
Build: [PASS/FAIL/SKIP]
### 3-Level File Verification
| File | L1 Exists | L2 Substantive | L3 Wired | Verdict |
|------|-----------|----------------|----------|---------|
| src/auth/login.ts | ✓ | ✓ | ✓ (imported by routes.ts) | PASS |
| src/auth/reset.ts | ✓ | STUB (returns null) | — | FAIL L2 |
| src/utils/format.ts | ✓ | ✓ | UNWIRED (0 importers) | FAIL L3 |
Overall: [PASS/FAIL]
### Failures (if any)
- Lint: [error details with file:line]
- Types: [first 5 type errors]
- Tests: [first 5 failing test names]
- Build: [first 5 build errors]
- Stubs: [files that failed Level 2 with stub pattern detected]
- Unwired: [files that failed Level 3 with 0 consumers]
From taste-skill (Leonxlnx/taste-skill, 3.4k★): Truncated code is worse than no code — it passes reviews but breaks at runtime.
When verifying code files (Level 2 SUBSTANTIVE check), also scan for truncation patterns — signs that the agent generated partial output and stopped:
| Banned Pattern | Language | What It Means |
|---|---|---|
| `// ...` or `/* ... */` as a statement | JS/TS | Agent truncated remaining code |
| `# ...` as a statement (not a comment) | Python | Agent truncated |
| `// rest of code` / `// remaining implementation` | Any | Explicit truncation admission |
| `// TODO: implement` as sole function body | Any | Placeholder, not implementation |
| `{ /* same as above */ }` | JS/TS | Copy-paste truncation |
| `...` (bare ellipsis, not spread operator) | JS/TS/Python | Truncation marker |
| `[PAUSED]` / `[CONTINUED]` in source | Any | Agent session marker leaked into code |
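A sketch of the truncation scan, covering the bare-ellipsis and marker rows above (the regexes are illustrative and deliberately narrow, so a JS spread like `...items` does not trip them):

```python
import re

# A bare ellipsis on its own line (a statement), not a spread like ...args
BARE_ELLIPSIS = re.compile(r"^\s*\.\.\.\s*$", re.MULTILINE)
TRUNCATION_MARKERS = [
    re.compile(r"//\s*(rest of code|remaining implementation)", re.IGNORECASE),
    re.compile(r"\[(PAUSED|CONTINUED)[^\]]*\]"),  # leaked session markers
]

def is_truncated(source: str) -> bool:
    """True if the file carries any truncation marker from the table above."""
    if BARE_ELLIPSIS.search(source):
        return True
    return any(p.search(source) for p in TRUNCATION_MARKERS)
```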
Action on detection: mark the file as TRUNCATED (a Level 2 FAIL) and list it in the Verification Report.

Continuation protocol — if the agent hit output limits mid-file, it should write [PAUSED — X of Y functions complete] in its response (NOT in the code file).

When any skill calls verification and then reports results upstream:
| Claim | Without | Verdict |
|---|---|---|
| "All tests pass" | Test runner stdout showing pass count | REJECTED — re-run and show output |
| "No lint errors" | Linter stdout | REJECTED — re-run and show output |
| "Build succeeds" | Build command stdout | REJECTED — re-run and show output |
| "I verified it" | Verification Report | REJECTED — run verification skill properly |
| "Fixed and working" | Before/after test output | REJECTED — show the diff in results |
Known failure modes for this skill. Check these before declaring done.
| Failure Mode | Severity | Mitigation |
|---|---|---|
| Claiming "all passed" without showing actual command output | CRITICAL | Evidence-Before-Claims HARD-GATE blocks this — stdout/stderr is mandatory |
| Agent says "verified" without producing Verification Report | CRITICAL | No report = no verification. Re-run the skill properly. |
| Skipping build because "changes are small" | HIGH | Constraint 4: all four checks mandatory — size of changes doesn't matter |
| Marking check as PASS when the tool isn't installed | MEDIUM | Mark as SKIP (not PASS) — PASS means the tool ran and reported clean |
| Stopping after first failure instead of running remaining checks | MEDIUM | Run all checks; aggregate all failures so developer can fix everything at once |
| Reporting PASS when output has warnings but zero errors | LOW | PASS is correct but note warning count — caller decides if warnings matter |
| Trusting exit code 0 without output verification | CRITICAL | Artifact Verification HARD-GATE: always confirm success indicator in stdout (pass count, "0 errors", output file exists) |
| Existence Theater — file exists but is a stub | HIGH | 3-Level check: Level 2 scans for stub patterns (<div>Placeholder</div>, return null, NotImplementedError) |
| Dead code — file created but never imported/used | MEDIUM | 3-Level check: Level 3 greps for consumers. 0 importers = UNWIRED |
| Truncated code — agent hit output limit mid-file | HIGH | Output Completion Enforcement: scan for // ..., // rest of code, bare ellipsis patterns. TRUNCATED = Level 2 FAIL |
~$0.01-0.03 per run. Haiku + Bash commands. Fast and cheap.