From code-forge
Use before claiming work is done, fixed, or passing — requires running verification commands and confirming output before any success claim. Prevents false completion claims, unverified assertions, and "should work" statements.
npx claudepluginhub tercel/tercel-claude-plugins --plugin code-forge

This skill uses the workspace's default tool permissions.
@../shared/execution-entrypoint.md
For this skill: start at the first executable step. If you catch yourself about to say "falling back to manual verification", STOP and go to the indicated step.
Evidence-based completion verification. Run before claiming any work is done.
Note: code-forge:impl runs verification automatically. This skill is for general use.
NO COMPLETION CLAIMS WITHOUT FRESH VERIFICATION EVIDENCE.
No exceptions. Not "it should work." Not "I just ran it." Not "the agent said it passed."
Every completion claim must pass through this gate — because false claims waste reviewer time and erode trust in automated workflows. A single unverified "tests pass" can mask a regression that reaches production.
IDENTIFY → RUN → READ → VERIFY → CLAIM
Example:
npm test → Output: 42 passed, 0 failed

Hedging words in a completion claim ("should work," "will pass," "I fixed it so...") are red flags: they mean you haven't verified.
Replace with evidence: "All 42 tests pass (output: 42 passed, 0 failed, exit code 0)."
If no automated verification exists, state that explicitly: "No automated test covers this — manual verification required: [steps]." This is honest, not hedging.
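The IDENTIFY → RUN → READ → VERIFY → CLAIM gate can be sketched as a small shell helper (the `verify_claim` name is hypothetical; substitute your project's real verification command for the placeholders):

```shell
#!/bin/sh
# verify_claim: run a verification command, capture its output and exit
# code, and print an evidence line only when the run actually succeeded.
verify_claim() {
  cmd="$1"
  out=$(eval "$cmd" 2>&1)
  code=$?
  if [ "$code" -eq 0 ]; then
    echo "VERIFIED: '$cmd' exited 0; last line: $(printf '%s\n' "$out" | tail -n 1)"
  else
    echo "NOT VERIFIED: '$cmd' exited $code; do not claim success"
  fi
  return "$code"
}

verify_claim "echo '42 passed, 0 failed'"      # stands in for: npm test
verify_claim "exit 1" || echo "blocked a false completion claim"
```

The point of the helper is that the claim is generated from the observed exit code and output, never written by hand before the command runs.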
Run command → See "X passed, 0 failed" → Claim "all tests pass"
NOT: "Tests should pass now" or "I fixed the issue so tests will pass."
Write test → Run (PASS) → Revert fix → Run (MUST FAIL) → Restore fix → Run (PASS)
The revert-and-fail step proves the test actually catches the bug.
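One way to run that loop concretely, sketched on a throwaway repo (assumes `git` is available; the fix is kept unstaged so `git stash` can revert and restore it):

```shell
#!/bin/sh
set -u
dir=$(mktemp -d)
cd "$dir"
git init -q .
git config user.email t@example.com
git config user.name t

# Buggy function, committed as the baseline.
printf 'add() { echo $(( $1 + $1 )); }\n' > lib.sh   # bug: adds $1 twice
git add lib.sh && git commit -qm baseline

# The fix, left unstaged so `git stash` can revert it.
printf 'add() { echo $(( $1 + $2 )); }\n' > lib.sh

run_tests() { . ./lib.sh; [ "$(add 2 3)" = "5" ]; }

run_tests      && echo "1. fix present: PASS"
git stash -q
run_tests      || echo "2. fix reverted: test FAILS (good - it catches the bug)"
git stash pop -q
run_tests      && echo "3. fix restored: PASS"
```

If step 2 passes instead of failing, the test never exercised the bug and proves nothing about the fix.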
Run build → See exit code 0, no errors → Claim "build passes"
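The same gate applies to builds; a minimal sketch (the placeholder stands in for your real build command, e.g. `npm run build`):

```shell
#!/bin/sh
# Run the build, keep both the log and the exit code as evidence.
build_cmd="echo 'compiled 12 modules'"   # placeholder for the real build
log=$(eval "$build_cmd" 2>&1)
code=$?
if [ "$code" -eq 0 ] && ! printf '%s\n' "$log" | grep -qi "error"; then
  echo "build passes (exit code 0, no errors in output)"
else
  echo "build FAILED (exit code $code); do not claim success"
fi
```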
For each requirement:
[ ] Identified verification method
[ ] Ran verification
[ ] Evidence recorded
NOT: "Tests pass, so the feature is complete."
Agent claims success → Check VCS diff → Run tests yourself → Verify changes
NEVER trust agent reports without independent verification.
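A sketch of that independent check on a throwaway repo (in real use, reading `git diff` and re-running your own test command replace trusting the agent's report):

```shell
#!/bin/sh
set -u
# Throwaway repo standing in for the real project.
dir=$(mktemp -d); cd "$dir"
git init -q .; git config user.email t@example.com; git config user.name t
echo 'echo hello' > script.sh
git add script.sh; git commit -qm baseline

# Pretend an agent edited the file and reported success.
echo 'echo hello, world' > script.sh

# 1. Inspect the VCS diff yourself.
git diff --stat
# 2. Re-run the verification yourself and capture the evidence.
sh script.sh > out.txt
code=$?
grep -q "hello, world" out.txt && [ "$code" -eq 0 ] \
  && echo "verified independently (exit code $code)" \
  || echo "agent claim NOT verified"
```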