Verify implementation against plan spec, run all automated verification, identify deviations
Install:

```
/plugin marketplace add astrosteveo/claude-code-plugins
/plugin install superharness@astrosteveo-plugins
```

Usage: `/superharness:validate <path to plan file>`

You are tasked with validating that an implementation plan was correctly executed, verifying all success criteria, and identifying any deviations or issues.
When invoked:

1. Determine context: were you part of the implementation in this session, or are you starting fresh?
2. Locate the plan: check `.harness/*/plan.md`, or ask the user for the path (see the sketch below).
3. Gather implementation evidence:
```bash
# Check recent commits
git log --oneline -n 20
git diff HEAD~N..HEAD   # where N covers the implementation commits

# Run comprehensive checks
make check test   # or equivalent
```
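A minimal shell sketch of steps 2 and 3, assuming the `.harness/*/plan.md` layout above; deriving N from the commits made since the plan file last changed is one heuristic, not part of the plugin's spec:

```bash
# Locate the plan file; if nothing matches, ask the user for the path.
PLAN=$(ls .harness/*/plan.md 2>/dev/null | head -n 1)
echo "Plan: ${PLAN:-not found}"

# One heuristic for N: count commits made after the plan file was last touched,
# then diff that range to see what the implementation actually changed.
PLAN_COMMIT=$(git log -n 1 --format=%H -- "$PLAN")
N=$(git rev-list --count "${PLAN_COMMIT}..HEAD")
git diff "HEAD~${N}..HEAD" --stat
```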
If starting fresh or you need more context:

1. Read the implementation plan completely.
2. Identify what should have changed.
3. Spawn parallel research tasks to discover the implementation:
Task 1 - Verify database changes:
Research whether migrations were added and the schema matches the plan.
Return: what was implemented vs. what the plan specified.

Task 2 - Verify code changes:
Find all modified files related to the feature.
Compare the actual changes to the plan specifications.
Return: a file-by-file comparison.

Task 3 - Verify test coverage:
Check whether tests were added or modified as specified.
Run the test commands and capture the results (see the sketch after this list).
Return: test status and any missing coverage.
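For Task 3, a minimal sketch for capturing test output and the real exit code, assuming a make-based test target as in the commands above:

```bash
# Keep the full output for the Evidence section of the report, and report
# the exit code of `make test` itself rather than of `tee`.
make test 2>&1 | tee test-output.log
echo "make test exit code: ${PIPESTATUS[0]}"
```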
For each phase in the plan:

1. Check completion status:
   - `- [x]` checkboxes in the plan
   - `phase(N): complete` trailers in git (see the sketch below)
2. Run automated verification.
3. Assess manual criteria.
4. Think deeply about edge cases.
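A minimal sketch of the trailer check, assuming the `phase(N): complete` trailer convention described in this workflow:

```bash
# List phase-completion trailers recorded in recent commit messages;
# widen -n if the implementation spans more than 50 commits.
git log -n 50 --format='%B' | grep -E '^phase\([0-9]+\): complete' | sort -u
```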
Create comprehensive validation summary:
## Validation Report: [Plan Name]
### Implementation Status
- [x] Phase 1: [Name] - Fully implemented (`phase(1): complete` in git)
- [x] Phase 2: [Name] - Fully implemented (`phase(2): complete` in git)
- [!] Phase 3: [Name] - Partially implemented (see issues)
### Automated Verification Results
- [x] Build passes: `make build` - exit code 0
- [x] Tests pass: `make test` - 42/42 tests pass
- [!] Linting issues: `make lint` - 3 warnings (see below)
### Evidence
```
$ make test
Running test suite...
42 tests passed, 0 failed
Exit code: 0

$ make lint
src/auth.ts:45 - warning: unused variable
...
```
### Code Review Findings
#### Matches Plan:
- Database migration correctly adds [table]
- API endpoints implement specified methods
- Error handling follows plan
#### Deviations from Plan:
- Used different variable names in [file:line]
- Added extra validation in [file:line] (improvement)
#### Potential Issues:
- Missing index on foreign key could impact performance
- No rollback handling in migration
### Manual Testing Required:
1. UI functionality:
- [ ] Verify [feature] appears correctly
- [ ] Test error states with invalid input
2. Integration:
- [ ] Confirm it works with existing [component]
- [ ] Check performance with large datasets
### Recommendations:
- Address linting warnings before merge
- Consider adding integration test for [scenario]
- Document new API endpoints
Validation Complete
Overall Status: [PASS/PARTIAL/FAIL]
Summary:
- Phases completed: X/Y (verified via git trailers)
- Automated checks: X passed, Y failed
- Deviations found: X
Evidence:
[Show actual command output]
Key Issues:
1. [Issue 1]
2. [Issue 2]
Manual Testing Still Required:
- [ ] [Step 1]
- [ ] [Step 2]
Full report generated. Ready to discuss findings or proceed to handoff.
Even if you were part of the implementation, always verify:

- `phase(N): complete` trailers in git

Related commands:

- `/superharness:implement` - Execute the implementation
- `/superharness:validate` - Verify correctness (this command)
- `/superharness:handoff` - Generate handoff document