From spec-plugin
Validate a version's implementation against its Definition of Done. For code projects: runs automated tests against the live application. For non-code projects: reviews deliverables against acceptance criteria. Runs incrementally on re-runs. Ends with human validation guidance.
npx claudepluginhub nexaedge/nexaedge-marketplace --plugin spec-plugin

This skill uses the workspace's default tool permissions.
Your task: validate the version's output and guide the human through final verification.
Inputs (read from specs/ in CWD):
- specs/<version>.md — focus on the Definition of Done
- specs/<version>/architecture.md
- specs/<version>/stories.md
- specs/<version>/qa/
- specs/architecture.md

For code projects: each Definition of Done item that can be verified programmatically should have test cases. Keep it lean — ~10-15 test cases per version.
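The input layout above can be expressed as a small path helper; a minimal sketch (the function name is illustrative, not part of the skill):

```python
from pathlib import Path

def qa_inputs(version: str, root: Path = Path(".")) -> dict[str, Path]:
    """Collect the spec files this skill reads, per the layout above."""
    specs = root / "specs"
    return {
        "version_spec": specs / f"{version}.md",       # focus: Definition of Done
        "architecture": specs / version / "architecture.md",
        "stories": specs / version / "stories.md",
        "qa_dir": specs / version / "qa",
        "global_architecture": specs / "architecture.md",
    }

print(qa_inputs("1.2.0")["version_spec"].as_posix())  # → specs/1.2.0.md
```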
Write specs at specs/<version>/qa/NNN-spec-name.md:
# QA Spec NNN: Spec Title
**Area**: API | Integration | UI | Health | User Flow
**Prerequisites**: What must be running
## Setup
Steps to prepare the environment.
## Test Cases
### TC-001: Test case title
**Definition of Done item**: (which DoD item this covers)
**Steps**:
1. Concrete action (e.g., "POST /api/resource with body: {...}")
2. Verify result
**Expected**: What should happen
**Severity**: critical | major | minor
## Human Review Checklist
- [ ] Visual/UX items to verify manually
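A step like TC-001's "POST /api/resource with body: {...}" can be executed and graded programmatically. A minimal sketch using Python's standard library (the URL, body, and expected status are placeholders from the template, not real endpoints):

```python
import json
import urllib.error
import urllib.request

def run_post_case(url: str, body: dict, expected_status: int) -> str:
    """Run one POST test case and return PASS, or FAIL with the observed status."""
    req = urllib.request.Request(
        url,
        data=json.dumps(body).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    try:
        with urllib.request.urlopen(req, timeout=10) as resp:
            status = resp.status
    except urllib.error.HTTPError as e:
        status = e.code  # non-2xx responses still yield a status to compare
    if status == expected_status:
        return "PASS"
    return f"FAIL (expected {expected_status}, got {status})"
```

An equivalent curl invocation works just as well; the point is that each test case maps to one scripted check whose result feeds the Run Results table.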
For non-code projects: Each Definition of Done item becomes a review criterion. Validation is done by reading and evaluating deliverables.
Write specs at specs/<version>/qa/NNN-spec-name.md:
# QA Spec NNN: Spec Title
**Area**: Completeness | Quality | Accuracy | Format
**Deliverables to review**: (list of files/documents)
## Review Criteria
### RC-001: Criterion title
**Definition of Done item**: (which DoD item this covers)
**What to check**:
1. Read [document/section]
2. Verify [specific quality or content requirement]
**Expected**: What a passing deliverable looks like
**Severity**: critical | major | minor
## Human Review Checklist
- [ ] Items requiring subjective judgment
Write index at specs/<version>/qa/specs.md.
Check for prior run results. If specs have ## Run Results, this is a re-run after fixes.
Incremental mode: Only re-run failed/skipped items. Add a ### Re-run: <date> subsection.
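The incremental selection can be sketched as a scan over the Run Results tables for FAIL/SKIP rows (the regex and function name are illustrative):

```python
import re

def items_to_rerun(spec_text: str) -> list[str]:
    """Return IDs of test cases or review criteria marked FAIL or SKIP(PED)
    in a spec file's Run Results tables."""
    ids = []
    for line in spec_text.splitlines():
        # Table rows look like: | TC-002 | Test title | FAIL | Expected X, got Y |
        m = re.match(r"\|\s*((?:TC|RC)-\d+)\s*\|[^|]*\|\s*(FAIL|SKIP\w*)\s*\|", line)
        if m:
            ids.append(m.group(1))
    return ids

sample = """\
## Run Results
### Run: 2024-01-01
| ID | Title | Result | Notes |
|----|-------|--------|-------|
| TC-001 | Test title | PASS | |
| TC-002 | Test title | FAIL | Expected X, got Y |
"""
print(items_to_rerun(sample))  # → ['TC-002']
```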
Verify ALL required services are up and reachable. If ANY service is down, STOP. Report via SendMessage.
Verify startup commands are documented. If missing, STOP and mark BLOCKED.
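The reachability check can be as simple as a TCP probe per required service; a minimal sketch (the service names and ports below are hypothetical examples, not defaults):

```python
import socket

def service_up(host: str, port: int, timeout: float = 2.0) -> bool:
    """True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Hypothetical service map; real hosts/ports come from the version's startup docs.
required = {"app": ("127.0.0.1", 3000), "db": ("127.0.0.1", 5432)}
down = [name for name, (host, port) in required.items() if not service_up(host, port)]
if down:
    print(f"BLOCKED: services down: {', '.join(down)}")  # then STOP and report
```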
For each test case: execute the steps (e.g., curl or httpie via Bash) and compare against the expected result.
For each review criterion: read the listed deliverables and evaluate them against the expected result.
Append or update ## Run Results in each spec file:
## Run Results
### Run: <date>
| ID | Title | Result | Notes |
|----|-------|--------|-------|
| TC-001 | Test title | PASS | |
| TC-002 | Test title | FAIL | Expected X, got Y |
**Summary**: X passed, Y failed, Z skipped
**Failures requiring fixes**:
- TC-002: <clear description of what's wrong>
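The Summary line follows mechanically from the per-item results; a minimal sketch:

```python
def summarize(results: dict[str, str]) -> str:
    """Format the Summary line from {item_id: result} pairs (PASS/FAIL/SKIP)."""
    counts = {"PASS": 0, "FAIL": 0, "SKIP": 0}
    for result in results.values():
        counts[result] += 1
    return (f"**Summary**: {counts['PASS']} passed, "
            f"{counts['FAIL']} failed, {counts['SKIP']} skipped")

print(summarize({"TC-001": "PASS", "TC-002": "FAIL"}))
# → **Summary**: 1 passed, 1 failed, 0 skipped
```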
Report via SendMessage:
Present the human validation guide:
## Human Validation Guide
### What was delivered
- Summary of the version's outcomes
### How to review (code projects: how to run it)
- Exact steps to see/use the deliverables
### What to verify
Walk through each Definition of Done item that requires human judgment.
### Quick checks
- [ ] Key deliverables are complete and accessible
- [ ] Quality meets expectations
- [ ] [Project-specific checks]
### Known limitations in this version
- (from "Simplified in this version")
The version ships only when the human confirms.
Append to the spec file:
## Validation Findings
### Issues Found
- What didn't meet criteria
### Missing or Incomplete
- Gaps in deliverables
### Patterns Observed
- Recurring issues