Post-planning quality validation - coverage matrix, red flag scanning, task quality enforcement, NFR validation, and REVIEWERS.md generation
From spex (install: `npx claudepluginhub rhuss/cc-spex --plugin spex`). This skill uses the workspace's default tool permissions.
This skill validates plan and task quality after /speckit.plan and /speckit.tasks have run. It checks coverage, scans for red flags, enforces task quality standards, and generates REVIEWERS.md.
{Skill: spec-kit}
Both plan.md and tasks.md MUST exist before running this skill. If either is missing, stop with an error:
```shell
SPEC_DIR="specs/[feature-name]"
[ -f "$SPEC_DIR/plan.md" ] && echo "plan.md found" || echo "ERROR: plan.md missing - run /speckit.plan first"
[ -f "$SPEC_DIR/tasks.md" ] && echo "tasks.md found" || echo "ERROR: tasks.md missing - run /speckit.tasks first"
```
If either file is missing, stop and instruct the user to generate the missing artifact.
Before detailed validation, check whether the plan attempts to cover multiple independent subsystems in a single document.
If the plan covers multiple independent subsystems, flag it: "This plan may benefit from being split into separate plans, one per subsystem. Each plan should produce working, testable software on its own."
This is advisory, not blocking. Some plans legitimately span subsystems.
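One rough, hypothetical heuristic (not a required check of this skill) is to count the distinct top-level directories referenced by file paths in plan.md; a plan touching many unrelated roots may be spanning subsystems. A sketch using a throwaway fixture in place of the real plan:

```shell
# Hypothetical heuristic -- in real use, set PLAN=specs/[feature-name]/plan.md.
PLAN=$(mktemp)
printf 'Edit src/auth/login.ts\nEdit src/auth/session.ts\nEdit billing/invoice.py\n' > "$PLAN"
# Collect the first path component of every file path mentioned in the plan.
ROOTS=$(grep -oE '[A-Za-z0-9_.-]+/[A-Za-z0-9_./-]+' "$PLAN" | cut -d/ -f1 | sort -u)
ROOT_COUNT=$(echo "$ROOTS" | grep -c .)
echo "top-level roots referenced: $ROOT_COUNT"
if [ "$ROOT_COUNT" -gt 2 ]; then
  echo "ADVISORY: plan may span multiple subsystems"
fi
```

The threshold of 2 is arbitrary; treat the output as a prompt for judgment, not a verdict.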
After tasks.md exists, verify every task meets these criteria:
Also check:
Verify the plan includes a file structure mapping:
If the plan lacks a file structure mapping, note it as a gap: tasks without a file map are harder to verify for completeness and overlap.
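A quick presence check can catch this gap early. The heading names below are assumptions; adjust them to your plan's conventions. The demo writes a fixture file in place of the real plan:

```shell
# Demo fixture; in real use, set PLAN=specs/[feature-name]/plan.md.
PLAN=$(mktemp)
printf '## File Structure\n\nsrc/auth/login.ts - new login form\n' > "$PLAN"
# Assumed heading variants: "File Structure", "Project Structure", "Source Structure".
if grep -qiE '^##+ +(file|project|source) structure' "$PLAN"; then
  FILEMAP=found
else
  FILEMAP=missing
fi
echo "file structure mapping: $FILEMAP"
```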
If tasks fail these checks, note the issues and suggest refinements.
Produce a coverage matrix mapping every spec requirement to its implementing tasks:
```
Requirement 1 → Tasks [X,Y] ✓
Requirement 2 → Tasks [Z] ✓
NFR 1 → Tasks [W] ✓
...
```
Flag any requirement without task coverage. All requirements must have at least one implementing task.
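When requirements carry stable identifiers, a first-pass coverage check can be mechanical. This sketch assumes FR-nnn / NFR-nnn identifiers (adjust the pattern to your spec's scheme) and uses fixture files in place of the real spec:

```shell
# Demo fixture; in real use, set SPEC_DIR=specs/[feature-name].
SPEC_DIR=$(mktemp -d)
printf 'FR-001 login\nFR-002 logout\nNFR-001 p95 latency\n' > "$SPEC_DIR/spec.md"
printf 'T001 login form (FR-001)\nT002 latency benchmark (NFR-001)\n' > "$SPEC_DIR/tasks.md"
UNCOVERED=0
for req in $(grep -ohE '(FR|NFR)-[0-9]+' "$SPEC_DIR/spec.md" | sort -u); do
  # Require a non-uppercase character before the ID so FR-001 does not
  # falsely match inside NFR-001.
  if grep -qE "(^|[^A-Z])$req" "$SPEC_DIR/tasks.md"; then
    echo "$req -> covered"
  else
    echo "$req -> NO IMPLEMENTING TASK"
    UNCOVERED=$((UNCOVERED + 1))
  fi
done
echo "uncovered requirements: $UNCOVERED"
```

Mechanical matching only proves an ID is mentioned; still read the matrix to confirm each task genuinely implements its requirement.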
Also verify:
Search plan.md and tasks.md for vague or incomplete language:
```shell
SPEC_DIR="specs/[feature-name]"
rg -i "figure out|tbd|todo|implement later|somehow|somewhere|not sure|maybe|probably|add appropriate|add validation|handle edge cases|similar to task" "$SPEC_DIR/plan.md" "$SPEC_DIR/tasks.md" || echo "No red flags found"
```
Review any matches:
Check that types, method signatures, property names, and function names used across tasks are consistent:
For example, if a method is named `clearLayers()` in Task 3, it must not be called `clearFullLayers()` in Task 7. Inconsistencies between tasks are plan bugs that will become code bugs during implementation.
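Detecting naming drift ultimately needs judgment, but an inventory of function-like identifiers across tasks.md makes near-duplicates easy to spot by eye. A sketch (the identifier pattern is an assumption; the demo uses fixture data):

```shell
# Demo fixture; in real use, set TASKS=specs/[feature-name]/tasks.md.
TASKS=$(mktemp)
printf 'Task 3: add clearLayers() to Map\nTask 7: call clearFullLayers() after reset\n' > "$TASKS"
# List each function-like identifier with its occurrence count, most frequent first.
IDENTIFIERS=$(grep -ohE '[A-Za-z_][A-Za-z0-9_]*\(\)' "$TASKS" | sort | uniq -c | sort -rn)
echo "$IDENTIFIERS"
```

Names that appear only once, or pairs that differ by a word (as here), are candidates for a consistency bug.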
For each non-functional requirement in the spec, verify the plan includes:
If any NFR lacks a measurement method, flag it.
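A rough scan can surface NFRs with no obvious measurement method: flag NFR lines in plan.md that mention no measurement vocabulary. The keyword list is a heuristic, not a standard; the demo uses a fixture file:

```shell
# Demo fixture; in real use, set PLAN=specs/[feature-name]/plan.md.
PLAN=$(mktemp)
printf 'NFR-001: p95 latency < 200ms, verified by a k6 benchmark\nNFR-002: system feels fast\n' > "$PLAN"
# Count NFR lines that contain none of the (assumed) measurement keywords.
UNMEASURED=$(grep -iE 'NFR-[0-9]+' "$PLAN" | grep -civE 'benchmark|measure|test|profil|monitor|metric')
echo "NFRs without an obvious measurement method: $UNMEASURED"
```

A hit here means "review this NFR by hand", not that the plan is necessarily wrong.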
Generating REVIEWERS.md is mandatory. The planning workflow MUST NOT proceed to PR creation without this file. After validation passes, generate specs/[feature-name]/REVIEWERS.md.
REVIEWERS.md is a reviewer's companion: a document that helps a human reviewer start and complete a meaningful spec review within 30 minutes. It is NOT a self-assessment, not a validation report, and not a dump of spec contents.
Write this document as if you are briefing a colleague who has 30 minutes and no prior context. Your job is to:
# Review Guide: [Feature Name]
**Spec:** [spec.md](spec.md) | **Plan:** [plan.md](plan.md) | **Tasks:** [tasks.md](tasks.md)
**Generated:** YYYY-MM-DD
---
## What This Spec Does
[2-4 sentences in plain language. What problem does this solve, and for whom?
A non-specialist should understand this paragraph.]
**In scope:** [Concise list of what this spec covers]
**Out of scope:** [What is explicitly excluded, and why. Be specific. These
boundaries are often where reviewers have the most useful feedback.]
## Bigger Picture
[How does this spec fit into the project's overall direction? What came before it,
what depends on it, what might follow? If this spec touches external technologies
or patterns, include relevant context from research.
Be honest: if the spec's relationship to adjacent work is unclear, say so.]
---
## Spec Review Guide (30 minutes)
> This guide helps you focus your 30 minutes on the parts of the spec and plan
> that need human judgment most. Each section points to specific locations and
> frames the review as questions.
### Understanding the approach (8 min)
Read [spec.md section X](spec.md#section-anchor) and
[section Y](spec.md#section-anchor) for the core approach. As you read, consider:
- [Question about whether the problem framing is right]
- [Question about whether the chosen approach fits the project context]
- [Question about an assumption the spec makes]
### Key decisions that need your eyes (12 min)
**[Decision 1 title]** ([spec.md section X.Y](spec.md#section-anchor))
[1-2 sentences on what was decided and what alternatives were considered.]
- Question for reviewer: [Specific question, e.g. "Is the performance trade-off
acceptable given our current load patterns?"]
**[Decision 2 title]** ([spec.md section X.Y](spec.md#section-anchor))
[Same pattern. Focus on decisions where alternatives were genuinely viable.]
[Repeat for 3-5 key decisions. Only include decisions where reviewer input
could change the outcome.]
### Areas where I'm less certain (5 min)
[Be honest about parts of the spec where the AI's interpretation may be wrong,
where requirements are ambiguous, or where the spec could reasonably go a
different direction. Link to specific sections.]
- [spec.md section X](spec.md#section-anchor): [What's unclear and why it matters]
- [plan.md phase N](plan.md#section-anchor): [What assumption might not hold]
### Risks and open questions (5 min)
[Frame risks as questions, not as a risk register. Link to specific sections.]
- [Risk framed as question, e.g. "If the external API changes its response
format ([FR-012](spec.md#fr-012)), is our fallback strategy sufficient?"]
- [Another risk-question with linked spec reference]
## Prior Review Feedback
> Include this section ONLY when the spec revision addresses feedback from a
> prior PR or review. Skip entirely for first-time specs.
[If prior review feedback exists, map each reviewer comment to how it was
addressed. Group by reviewer so each person can find their concerns. Never
silently omit a comment. Mark unaddressed items as Deferred, Disagreed, or
Out of scope with justification.]
| # | Reviewer | Original Concern | How Addressed | Spec Location |
|---|----------|-----------------|---------------|---------------|
| 1 | @reviewer | [Paraphrased concern] | [Resolution] | [section X.Y](spec.md#anchor) |
---
*Full context in linked [spec](spec.md) and [plan](plan.md).*
Note: The Code Review Guide section is appended later by spex:review-code after implementation completes. See the review-code skill for that template.
All section references MUST use the [section title](spec.md#anchor) link format. The anchor is the lowercase, hyphenated form of the heading (e.g., `## Functional Requirements` becomes `#functional-requirements`). Never use bare references like `spec.md` section 3 without a link.

REVIEWERS.md MUST contain at least 3 of these 5 headings: What This Spec Does, Bigger Picture, Spec Review Guide, Areas where I'm less certain, Risks and open questions.
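One way to derive the anchor from a heading is a minimal sketch like the following; it handles lowercasing, spaces, and stray punctuation, but not every edge case of real anchor generation:

```shell
heading="Functional Requirements"
# Lowercase, replace spaces with hyphens, drop anything outside [a-z0-9-].
anchor=$(printf '%s' "$heading" | tr '[:upper:]' '[:lower:]' | tr ' ' '-' | tr -cd 'a-z0-9-')
echo "#$anchor"
```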
After writing the file, verify:
```shell
SPEC_DIR="specs/[feature-name]"
REQUIRED_HEADINGS="What This Spec Does|Bigger Picture|Spec Review Guide|Areas where|Risks and open"
# grep -c prints 0 (and exits nonzero) when nothing matches, so only a
# missing file needs a fallback value.
HEADING_COUNT=$(grep -cE "^##[#]? +($REQUIRED_HEADINGS)" "$SPEC_DIR/REVIEWERS.md" 2>/dev/null)
HEADING_COUNT=${HEADING_COUNT:-0}
if [ "$HEADING_COUNT" -lt 3 ]; then
  echo "ERROR: REVIEWERS.md has only $HEADING_COUNT of 5 expected sections. Regenerate with the template from step 5."
fi
```
If the check fails, the file was likely generated as a summary/self-review instead of a question-driven reviewer guide. Delete it and regenerate using the template above.
Report to the user:
After presenting results, collect ALL findings from steps 0-4 into a numbered list. Include both blocking and non-blocking issues. Present them as a consolidated findings summary:
```
Findings:
1. [BLOCKING] Task T003 is not actionable: "figure out auth approach"
2. [advisory] Plan may benefit from splitting (2 independent subsystems)
3. [gap] FR-007 has no implementing task in the coverage matrix
4. [red-flag] tasks.md line 42: "TBD" placeholder
5. [nfr] NFR-002 "response time < 200ms" has no measurement method
```
Then ask the user how to proceed:
Use AskUserQuestion with three options: Fix all, Let me pick, and Skip.
If "Fix all": Apply fixes to plan.md and/or tasks.md for each finding, then re-run the relevant checks to confirm resolution.
If "Let me pick": Use AskUserQuestion with multiSelect: true, listing up to 4 findings as options (if more than 4, batch them across multiple rounds). Each option's label is the short finding (e.g., "#1 Task T003 not actionable") and the description is the detail. The user can select which to fix and use "Other" to add comments or instructions for specific findings.
After the user selects findings, apply fixes to plan.md and/or tasks.md for each selected finding, incorporating any comments the user attached, then re-run the relevant checks to confirm resolution.
After all selected fixes are applied, re-present any remaining unaddressed findings as informational (no further prompting).
If "Skip": Proceed without changes. Note that blocking issues remain unresolved.
This skill is invoked by:
- /speckit.plan (after task generation)
- /spex:review-plan

This skill invokes:
{Skill: spec-kit} for initialization