Review specifications for completeness, clarity, testability, and quality. Use with --mode quick/standard/thorough to control review depth. Default is thorough (3 consecutive passes).
Install via the plugin marketplace:

/plugin marketplace add teliha/dev-workflows
/plugin install dev-workflows@dev-workflows

This skill inherits all available tools. When active, it can use any tool Claude has access to.
This skill automatically activates when:
- Reviewing specification files in a specs/ directory

Ensure specifications are complete, clear, and implementable BEFORE development begins. Catching issues in specs is much cheaper than fixing them in code.
Use --mode or -m to control review depth:
| Mode | Passes | Categories | Use Case |
|---|---|---|---|
| quick | 1 | COMPLETENESS, CLARITY only | Fast sanity check |
| standard | 2 consecutive | All 5 categories | Balanced review |
| thorough | 3 consecutive | All 5 categories | Full review (default) |
spec review --mode quick specs/feature/spec.md
spec review -m standard specs/feature/spec.md
spec review specs/feature/spec.md # defaults to thorough
- Quick Mode (1 pass): reviews COMPLETENESS and CLARITY only; a single issue-free pass is sufficient.
- Standard Mode (2 consecutive passes): reviews all 5 categories; the spec must pass twice in a row with no issues.
- Thorough Mode (3 consecutive passes) - DEFAULT: reviews all 5 categories; the spec must pass three times in a row with no issues.
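For reference, the mode-to-requirements mapping can be written as a small lookup table. A minimal sketch, assuming Python; the category identifiers are shorthand for the five checklist categories below, and `mode_to_passes` matches the name used in the iteration pseudocode later on:

```python
# Illustrative mode configuration (mirrors the table above).
# The category identifiers are shorthand; their exact spelling is an assumption.
ALL_CATEGORIES = [
    "COMPLETENESS", "CLARITY", "TESTABILITY",
    "EDGE_CASES_CONSISTENCY", "TECHNICAL_SECURITY",
]

MODES = {
    "quick":    {"required_passes": 1, "categories": ["COMPLETENESS", "CLARITY"]},
    "standard": {"required_passes": 2, "categories": ALL_CATEGORIES},
    "thorough": {"required_passes": 3, "categories": ALL_CATEGORIES},  # default
}

def mode_to_passes(mode: str) -> int:
    """Number of consecutive issue-free passes required for a mode."""
    return MODES[mode]["required_passes"]
```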
Problem: If you (Claude) helped write the specification, reviewing it in the same conversation introduces bias. You may unconsciously overlook gaps you introduced, assume unstated context is already documented, or judge your own wording as clearer than it is.
Solution: Always perform reviews with a FRESH context using a subagent.
MANDATORY: When reviewing a specification, spawn a NEW agent with NO prior context:
Use the Task tool with subagent_type="general-purpose" and a prompt that:
1. Contains ONLY the spec file path to review
2. Does NOT include conversation history
3. Asks for objective review against the checklist
The reviewer should have NO memory of writing the spec.
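For illustration, the reviewer prompt can be assembled from nothing but the spec path and the category checklist. A minimal sketch, assuming Python; the exact wording is not prescriptive, and the key property is that no conversation history is included:

```python
def build_review_prompt(spec_path: str, category: str) -> str:
    """Build a context-free review prompt for a single checklist category.

    Only the file path and the category go in; no conversation history,
    so the reviewing subagent has no memory of writing the spec.
    """
    return (
        f"Review the specification at {spec_path} against the "
        f"{category} checklist. You have no prior context about this spec. "
        "Report every issue as JSON with severity (error|warning), "
        "location (file:section), and description."
    )

# Example (hypothetical invocation):
# build_review_prompt("specs/feature/spec.md", "CLARITY")
```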
Find the specification to review: use the spec file path passed to the command (e.g. specs/feature/spec.md).
Launch 5+ parallel subagents using Task tool (subagent_type: general-purpose) for context-isolated review:
Category 1: COMPLETENESS
- Purpose/Overview defined
- Scope (included/excluded) defined
- Functional requirements listed
- Non-functional requirements listed
- Data requirements (inputs, outputs, formats)
- Error handling specified
- Edge cases documented
- Dependencies listed
- Constraints documented
- Acceptance criteria defined
Category 2: CLARITY
- No ambiguous terms ("fast", "many", "appropriate")
- Specific values (numbers, limits, thresholds)
- Clear ownership (who/what is responsible)
- Defined terms (technical terms explained)
- No hidden assumptions
Category 3: TESTABILITY
- Measurable criteria
- Clear pass/fail conditions
- Expected outputs for given inputs
- Error conditions specified
- Edge case tests defined
Category 4: EDGE CASES & CONSISTENCY
- Zero/empty values handled
- Maximum values handled
- Boundary conditions specified
- Null/undefined cases covered
- Internal consistency (no contradictions)
- Consistent terminology
Category 5: TECHNICAL & SECURITY
- Technically feasible
- Resource requirements realistic
- Performance targets achievable
- Dependencies available
- Authentication requirements documented
- Authorization checks specified
- Data protection defined
- Input validation specified
Each subagent returns:
{
"category": "...",
"issues": [
{"severity": "error|warning", "location": "file:section", "description": "..."}
],
"passed": true/false
}
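A sketch of how these results might be aggregated, assuming Python (the field names follow the schema above; the `aggregate` helper itself is illustrative). Note that warnings count against a clean pass, matching the rule below:

```python
from typing import Any

def aggregate(results: list[dict[str, Any]]) -> dict[str, Any]:
    """Combine per-category subagent results into a single iteration summary."""
    errors = sum(1 for r in results for i in r["issues"] if i["severity"] == "error")
    warnings = sum(1 for r in results for i in r["issues"] if i["severity"] == "warning")
    return {
        # A clean pass requires every category to pass and zero issues of any severity.
        "passed": all(r["passed"] for r in results) and errors == 0 and warnings == 0,
        "errors": errors,
        "warnings": warnings,
    }
```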
IMPORTANT: Fix ALL issues including warnings, not just errors.
# Passes required based on mode:
# - quick: 1 pass (no consecutive requirement)
# - standard: 2 consecutive passes
# - thorough: 3 consecutive passes (default)
consecutive_passes = 0
iteration = 0
REQUIRED_PASSES = mode_to_passes(mode) # 1, 2, or 3
WHILE consecutive_passes < REQUIRED_PASSES:
iteration++
1. Aggregate results from all subagents
2. If issues found (errors OR warnings):
- consecutive_passes = 0 # Reset counter
- Fix ALL errors first
- Then fix ALL warnings
3. If no issues (PASS):
- consecutive_passes++
4. Re-run parallel review with fresh subagents
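A minimal runnable sketch of this loop, assuming Python (`run_parallel_review` and `fix_issues` are hypothetical stand-ins for the subagent launch and fix steps; `mode_to_passes` and `aggregate` come from the sketches above):

```python
def review_until_stable(spec_path: str, mode: str = "thorough") -> int:
    """Loop until the required number of consecutive clean passes is reached.

    run_parallel_review() and fix_issues() are hypothetical stand-ins for
    launching fresh subagents and applying fixes.
    """
    required = mode_to_passes(mode)          # 1, 2, or 3
    consecutive = 0
    iteration = 0
    while consecutive < required:
        iteration += 1
        results = run_parallel_review(spec_path, mode)  # fresh subagents every iteration
        summary = aggregate(results)
        if summary["errors"] or summary["warnings"]:
            consecutive = 0                  # any issue resets the counter
            fix_issues(results)              # fix ALL errors first, then ALL warnings
        else:
            consecutive += 1
    return iteration
```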
Termination Conditions by Mode:
- quick: terminate after the first issue-free pass.
- standard: terminate after 2 consecutive issue-free passes; any issue resets the counter.
- thorough: terminate after 3 consecutive issue-free passes; any issue resets the counter.
Output structured summary:
## Spec Review Results: <spec-name>
| Category | Status | Issues |
|----------|--------|--------|
| Completeness | PASSED/FAILED | N errors, M warnings |
| Clarity | PASSED/FAILED | N errors, M warnings |
| Testability | PASSED/FAILED | N errors, M warnings |
| Edge Cases & Consistency | PASSED/FAILED | N errors, M warnings |
| Technical & Security | PASSED/FAILED | N errors, M warnings |
### Iterations Summary
| Iteration | Issues Found | Consecutive Passes |
|-----------|--------------|-------------------|
| 1 | 5 errors, 3 warnings | 0 |
| 2 | 0 | 1 |
| 3 | 0 | 2 |
| 4 | 0 | 3 |
### Fixed Issues (per iteration)
- Iteration 1: [list of fixes]
### Final Status: PASSED (3 consecutive)
Review stability confirmed with 3 consecutive passes.
| Level | Meaning | Action |
|---|---|---|
| error | Blocks implementation | Must fix before proceeding |
| warning | Could cause problems | Should fix (also auto-fixed in loop) |
When an issue is found after a previous PASS iteration, it indicates a review oversight. Record these in CLAUDE.md at the directory level where the issue exists.
Place overlooked issues in CLAUDE.md files at the directory containing the problematic file.
Example: If the issue is in specs/feature/spec.md, record it in specs/feature/CLAUDE.md.

When oversight occurs (PASS then FAIL), append an entry to the CLAUDE.md in that directory:

# Overlooked Issues for This Directory
## [Issue Pattern Name]
**Category**: COMPLETENESS/CLARITY/TESTABILITY/EDGE_CASES/TECHNICAL/SECURITY
**File**: filename.md
**Missed in iteration**: N
**Found in iteration**: M
**Description**: What was missed
**Check instruction**: Specific verification step
---
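If this step were automated, recording an oversight could look like the following sketch (the `record_oversight` helper and its parameters are hypothetical; the entry fields mirror the template above):

```python
from pathlib import Path

def record_oversight(problem_file: str, category: str, missed: int, found: int,
                     pattern: str, description: str, check_instruction: str) -> None:
    """Append an overlooked-issue entry to CLAUDE.md in the problematic file's directory."""
    target = Path(problem_file).parent / "CLAUDE.md"    # e.g. specs/feature/CLAUDE.md
    if not target.exists():
        target.write_text("# Overlooked Issues for This Directory\n", encoding="utf-8")
    entry = (
        f"\n## {pattern}\n"
        f"**Category**: {category}\n"
        f"**File**: {Path(problem_file).name}\n"
        f"**Missed in iteration**: {missed}\n"
        f"**Found in iteration**: {found}\n"
        f"**Description**: {description}\n"
        f"**Check instruction**: {check_instruction}\n"
        "---\n"
    )
    with target.open("a", encoding="utf-8") as f:
        f.write(entry)
```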
When reviewing, subagents should flag ambiguous phrasing and suggest specific replacements, for example:
| Ambiguous | Clear |
|---|---|
| "The system should respond quickly" | "Response time must be < 200ms" |
| "Handle large files" | "Support files up to 100MB" |
| "Users can access appropriate data" | "Users can access data they own" |
| "Should be secure" | "Must use TLS 1.3, authenticate all requests" |
| "As needed" | "When X condition is met" |
Based on common issues, specs should:
- Use RFC 2119 keywords (MUST, SHOULD, MAY) so requirement strength is unambiguous
- Quantify everything (limits, thresholds, timeouts), e.g. "respond within 200ms" rather than "respond quickly"
- Include examples of expected inputs and outputs
- Document decisions and the reasoning behind them
- Consider the implementer: could someone build this without asking follow-up questions?
This skill works well with:
- check-spec-contradictions - After individual review, check against other specs
- implementation-review - After implementation, verify against the reviewed spec

Workflow:
Write Spec -> Spec Review (this skill) -> Fix Issues -> Check Contradictions -> [USER DECISION] -> Implement -> Implementation Review
After spec is approved, DO NOT automatically start implementation.
When the spec review passes (SPEC APPROVED), report the result, summarize anything notable from the review, and stop. Wait for an explicit instruction from the user before implementing.
Rationale: The user should maintain control over when implementation begins.