Verifies implementation completion by running tests, code hygiene review, spec compliance validation, and drift checks; blocks claims on failures. Use before commits or merges.
From spex: `npx claudepluginhub rhuss/cc-spex --plugin spex`. This skill uses the workspace's default tool permissions.
Claiming work is complete without verification is dishonesty, not efficiency.
Core principle: Evidence before claims, always.
Verify implementation is complete by running tests AND validating spec compliance.
Key Additions Beyond Standard Verification:
NO COMPLETION CLAIMS WITHOUT FRESH VERIFICATION EVIDENCE
If you haven't run the verification command in this message, you cannot claim it passes.
BEFORE claiming any status or expressing satisfaction:
1. IDENTIFY: What command proves this claim?
2. RUN: Execute the FULL command (fresh, complete)
3. READ: Full output, check exit code, count failures
4. VERIFY: Does output confirm the claim?
- If NO: State actual status with evidence
- If YES: State claim WITH evidence
5. CHECK SPEC: Does implementation match spec?
- If NO: State actual compliance with evidence
- If YES: State compliance score WITH evidence
6. ONLY THEN: Make the claim
Skip any step = lying, not verifying
/speckit-implement (via superpowers trait overlay)

If no spec is specified, discover available specs:
# List all specs in the project
fd -t f "spec.md" specs/ 2>/dev/null | head -20
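If `fd` is not available on the machine, plain `find` gives the same listing:
# Fallback when fd is not installed
find specs -type f -name "spec.md" 2>/dev/null | head -20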
If specs found: Present list and ask user to select one using AskUserQuestion.
Example:
Found 2 specs in this project:
1. specs/0001-user-auth/spec.md
2. specs/0002-api-gateway/spec.md
Which spec should I verify implementation against?
If no specs found: Inform user:
No specs found in specs/ directory.
Verification requires a spec to validate against.
Use `spex:brainstorm` or `/speckit-specify` to create one first.
Execute all tests:
# Run full test suite
npm test # or pytest, go test, etc.
Check results: read the full output, check the exit code, and count failures.
If tests fail: verification cannot pass. Record each failure as a blocking issue before moving on.
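A minimal way to capture fresh evidence, assuming a bash shell and an npm-based suite (the log path is illustrative):
# Keep the full output and the real exit code as evidence
npm test 2>&1 | tee /tmp/verify-tests.log
echo "exit code: ${PIPESTATUS[0]}"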
Before checking spec compliance, review every changed file for mechanical defects. These are craft-level issues that no spec describes but that cause real bugs.
For each function you wrote or modified, check:
If any issue found: Fix before proceeding. These are not style nits; they are latent bugs.
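To enumerate the changed files for this review (assumes a git repository and a `main` base branch; adjust the ref to your workflow):
# List files changed relative to the base branch (base ref is an assumption)
git diff --name-only main...HEAD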
Load spec:
cat specs/features/[feature-name].md
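If the spec labels its requirements consistently, a grep can seed the checklist below (the heading names in the pattern are an assumption about the spec's style):
# Pull requirement lines out of the spec to build the checklist skeleton
grep -nE "Functional Requirement|Error Case|Edge Case|Non-Functional" specs/features/[feature-name].md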
Check each requirement:
Functional Requirement 1: [From spec]
  - IMPLEMENTED / MISSING
  - TESTED / UNTESTED
  - MATCHES SPEC / DEVIATION
Functional Requirement 2: [From spec]
...
Verify that every requirement is implemented, tested, and matches the spec, then calculate compliance:
Spec Compliance: X/X requirements = XX%
If compliance < 100%: use `spex:evolve` to reconcile.

Compare the spec against the current implementation:
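One quick signal, assuming the project is in git (paths are illustrative):
# Compare when the spec and the implementation last changed
git log -1 --format="%ci  spec" -- specs/features/[feature-name].md
git log -1 --format="%ci  code" -- src/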
Common drift sources: the spec was updated after implementation began, or the code changed without a matching spec update.
If drift detected: use `spex:evolve` to reconcile.

From the spec, check each success criterion (an evidence sketch follows the checklist):
Success Criteria (from spec):
- [ ] Criterion 1: [Description]
Status: Met / Not met
Evidence: [How verified]
- [ ] Criterion 2: [Description]
...
All criteria must be met.
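When recording Evidence, point at a command actually run in this session, for example (assumes a Jest-based suite; the name filter is hypothetical):
# Re-run only the tests covering this criterion as evidence
npx jest -t "profile update"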
If any criterion not met: verification fails; record it as a blocking issue.
Report structure:
# Verification Report: [Feature Name]
**Date:** YYYY-MM-DD
**Spec:** specs/features/[feature].md
## Test Results
**Status:** PASS / FAIL
[Test output]
**Summary:**
- Total: X tests
- Passed: X
- Failed: X
- Coverage: XX%
## Spec Compliance
**Status:** COMPLIANT / NON-COMPLIANT
**Compliance Score:** XX%
### Requirements Status
- Functional: X/X (XX%)
- Error Cases: X/X (XX%)
- Edge Cases: X/X (XX%)
- Non-Functional: X/X (XX%)
### Deviations
[List any deviations found]
## Spec Drift Check
**Status:** NO DRIFT / DRIFT DETECTED
[Details if drift found]
## Success Criteria
**Status:** ALL MET / INCOMPLETE
- [x] Criterion 1
- [x] Criterion 2
...
## Overall Status
**VERIFIED - Ready for completion**
OR
**NOT VERIFIED - Issues must be resolved**
**Blocking Issues:**
- [Issue 1]
- [Issue 2]
**Next Steps:**
[What needs to be done]
All conditions must be true: tests pass with 0 failures, spec compliance is 100%, no drift is detected, and every success criterion is met.
If ALL true:
touch "${TMPDIR:-/tmp}/.claude-spex-verified-${SESSION_ID}"
(The SESSION_ID is available from the hook context. If not, use a stable session identifier.)

If ANY false: do not create the marker; report the blocking issues instead.
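A minimal sketch of the ALL-true branch, with a fallback identifier that is an assumption rather than part of the hook contract:
# Only reached when every gate above passed
SESSION_ID="${SESSION_ID:-manual-$(date +%Y%m%d)}"   # fallback if the hook context did not provide one (assumption)
touch "${TMPDIR:-/tmp}/.claude-spex-verified-${SESSION_ID}"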
| Claim | Requires | Not Sufficient |
|---|---|---|
| Tests pass | Test command output: 0 failures | Previous run, "should pass" |
| Spec compliant | Line-by-line requirement check | "Looks complete" |
| Linter clean | Linter output: 0 errors | Partial check |
| Build succeeds | Build command: exit 0 | Linter passing |
| Bug fixed | Test original symptom: passes | Code changed |
| Requirements met | Line-by-line checklist | Tests passing |
| Excuse | Reality |
|---|---|
| "Should work now" | RUN the verification |
| "I'm confident" | Confidence is not evidence |
| "Just this once" | No exceptions |
| "Tests pass" | Did you check spec compliance? |
| "Spec compliant" | Did you run the tests? |
| "I'm tired" | Exhaustion is not an excuse |
| "Partial check is enough" | Partial proves nothing |
Use TodoWrite to track each verification step (tests, spec compliance, drift check, success criteria) and its result.
You: Running verification before completion...
Step 1: Running tests...
$ npm test
All tests passing (24/24)
Coverage: 95%
Step 2: Validating spec compliance...
Reading spec: specs/features/user-profile-api.md
Checking requirements:
COMPLIANT: Functional Requirement 1: PUT endpoint - Implemented & tested
COMPLIANT: Functional Requirement 2: Validation - Implemented & tested
COMPLIANT: Functional Requirement 3: Auth - Implemented & tested
COMPLIANT: Functional Requirement 4: Authorization - Implemented & tested
COMPLIANT: Functional Requirement 5: Response format - Implemented & tested
COMPLIANT: Functional Requirement 6: Partial updates - Implemented & tested
COMPLIANT: Error Case 1: Missing JWT - Handled & tested
COMPLIANT: Error Case 2: User not found - Handled & tested
COMPLIANT: Error Case 3: Validation failure - Handled & tested
COMPLIANT: Error Case 4: Authorization failure - Handled & tested
COMPLIANT: Edge Case 1: Empty update - Handled & tested
COMPLIANT: Edge Case 2: Invalid avatar URL - Handled & tested
COMPLIANT: Edge Case 3: Special characters in name - Handled & tested
Spec Compliance: 13/13 = 100%
Step 3: Checking for spec drift...
Comparing spec to implementation...
No drift detected
Step 4: Verifying success criteria...
From spec:
MET: Authenticated users can update their profile
MET: Validation errors return 422 with clear messages
MET: Users cannot update other users' profiles
MET: All edge cases handled gracefully
All criteria met
=== VERIFICATION REPORT ===
Tests: PASS (24/24)
Spec Compliance: 100%
Spec Drift: None
Success Criteria: All met
**OVERALL: VERIFIED**
Implementation is complete and verified. Ready for commit/merge.
You: Running verification before completion...
Step 1: Running tests...
$ npm test
Tests FAILED (22/24 passing, 2 failing)
Failures:
1. "Should handle database connection errors"
Error: Connection retry not implemented
2. "Should return 503 when database unavailable"
Error: Returns 500 instead of 503
**BLOCKING: Tests must pass before proceeding**
Step 2: Validating spec compliance...
Reading spec: specs/features/user-profile-api.md
Checking requirements:
[checks requirements...]
DEVIATION: Non-Functional Requirement 1: Database retry logic
Status: Not implemented
Spec says: "3 retry attempts with exponential backoff"
Code does: Single attempt, no retry
Spec Compliance: 12/13 = 92%
Step 3: Checking success criteria...
From spec:
MET: Authenticated users can update their profile
MET: Validation errors return 422
NOT MET: Handles database failures gracefully (missing retry logic)
MET: Users cannot update other users' profiles
=== VERIFICATION REPORT ===
Tests: FAIL (2 failures)
Spec Compliance: 92% (missing retry logic)
Success Criteria: 3/4 met
**OVERALL: NOT VERIFIED**
**Blocking Issues:**
1. Database retry logic not implemented (from spec)
2. Tests failing for error handling
3. Returns wrong HTTP status (500 vs 503)
**Next Steps:**
1. Implement database retry logic per spec
2. Fix HTTP status code
3. Re-run verification
**DO NOT PROCEED until all issues resolved.**
Tests: 24/24 passing
Spec Compliance: 85%
BLOCKED
Issue: Tests don't cover all spec requirements
Action: Add tests for uncovered requirements, re-verify
Tests: 20/24 passing
Spec Compliance: 100%
BLOCKED
Issue: Implementation exists but has bugs
Action: Fix bugs, ensure tests pass, re-verify
Tests: 24/24 passing
Spec Compliance: 100%
Spec Drift: Spec updated after implementation
BLOCKED
Issue: Spec changed, code doesn't reflect changes
Action: Update code or revert spec, re-verify
This skill enforces quality gates:
No exceptions. No shortcuts.
These gates exist to prevent unverified work from reaching commits and merges.
Verification is not optional.
Verification failures are information.
No shortcuts for verification.
Run the command. Read the output. Check the spec. THEN claim the result.
Fix issues, don't rationalize past them.
Evidence before assertions. Always.
This is non-negotiable.