# spec-review
Reviews and enriches story specifications with codebase-verified technical sub-tasks, architecture alignment checks, design simplification suggestions, and API test plans. Dynamically discovers project architecture at runtime. Use when: (1) a new story spec needs review before implementation, (2) a spec has high-level tasks but lacks implementation-ready detail, (3) need to verify spec assumptions against actual codebase, (4) a spec references API changes but has no test plan, (5) reviewing specs that reference data shapes or pipeline ordering, (6) spec subtasks mention add field X to object Y or call function at line N.
```shell
npx claudepluginhub abhattacherjee/claude-code-skills --plugin spec-review
```
This skill uses the workspace's default tool permissions.
Comprehensive spec review combining codebase verification and implementation planning.
Discovers project architecture at runtime, so it works with any codebase structure.
Story specs with technical subtasks frequently contain incorrect assumptions about the codebase — fields on the wrong object, wrong data shapes, incorrect pipeline ordering, unnecessary wrapper functions. These look plausible but cause confusion during implementation.
For every spec subtask that references code, verify these 8 categories:
**1. Field location** — don't assume a field is on the object you expect:

```shell
grep -rn "fieldName" <service-directory> --include="*.ts" --include="*.js"
```

Common trap: enrichment/transform services copy SOME fields from source items but not ALL.

**2. Data shape** — inspect the actual structure:

```shell
grep -A5 "fieldName" <service-file>
# May reveal { value: number, unit: string }, not just a number
```

Common trap: duration/time fields are often objects `{value, unit}`, not plain numbers.

**3. Pipeline ordering** — trace the processing pipeline to verify what data exists at each stage:

```shell
grep -n "extract\|enrich\|transform\|validate\|process" <orchestrator-file>
```

Common trap: integration points may be BEFORE enrichment/transform runs.

**4. Function signatures** — check what the function actually accepts:

```shell
grep -A3 "function targetFunction\|const targetFunction" <file>
```

Common trap: functions may infer values from their inputs rather than accepting explicit params.

**5. Conditional assignment** — check when a field is actually populated:

```shell
grep -B2 -A2 "fieldName =" <config-or-service-file>
# May reveal: field only set under certain conditions
```

**6. Dynamic dispatch** — find indirection that hides the real call site:

```shell
grep -n "buildDynamic\|delegate\|dispatch\|forward" <relevant-files>
```

**7. Pipeline agents and scripts** — find pipeline agents/scripts that run build/enrichment steps:

```shell
grep -rn "script-name\|task-name" .claude/agents/ scripts/ .github/
```

**8. ID resolution** — when the spec says "function A returns IDs that function B uses for lookup", verify both use the same ID resolution/normalization:

```shell
grep -A10 "function producer\|function consumer" <service-files>
```

Common trap: a producer resolves IDs to canonical form but the consumer expects raw values.
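The first two checks can be sketched end-to-end against a throwaway file. Every file, field, and service name below is illustrative, not from any real project:

```shell
# Create a fake enrichment service to verify against
tmp=$(mktemp -d)
cat > "$tmp/enrich-service.ts" <<'EOF'
// Enrichment copies only SOME fields from the source item
const enriched = { id: item.id, duration: { value: 30, unit: "s" } };
EOF

# Check 1: is the field where the spec claims? (no match -> report MISSING)
claim=$(grep -rn --include="*.ts" "durationMs" "$tmp" || echo "MISSING: durationMs")

# Check 2: what shape does the field actually have?
shape=$(grep -n "duration" "$tmp/enrich-service.ts")

echo "$claim"
echo "$shape"
rm -rf "$tmp"
```

Here the spec's claimed `durationMs` number does not exist; the real field is a `duration` object with `value` and `unit`, which is exactly the kind of discrepancy the checklist is meant to surface.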
After applying the checklist, every code reference in the spec should be verified against the actual codebase, corrected, or flagged as missing.
```shell
# Discover project architecture (layers, services, test tools)
~/.claude/skills/spec-review/scripts/discover-project-architecture.sh "$(git rev-parse --show-toplevel)"
~/.claude/skills/spec-review/scripts/discover-project-architecture.sh "$(git rev-parse --show-toplevel)" --json

# Extract spec sections for analysis
~/.claude/skills/spec-review/scripts/extract-spec-sections.sh <spec-file>
~/.claude/skills/spec-review/scripts/extract-spec-sections.sh <spec-file> --json

# Task checklist for full review
~/.claude/skills/spec-review/scripts/task-manifest.sh full-review
```
Before starting a full review, create the task checklist from scripts/task-manifest.sh full-review:
| # | subject | activeForm |
|---|---|---|
| 1 | Discover architecture and extract spec | Discovering project architecture |
| 2 | Launch 4 parallel analysis agents | Analyzing spec with 4 parallel agents |
| 3 | Synthesize findings into enrichments | Synthesizing review findings |
| 4 | Calculate readiness score | Calculating implementation readiness |
| 5 | Present report and apply enrichments | Presenting review report |
Update rules:
- Set a task to `in_progress` (via TaskUpdate) immediately before starting it.
- Set it to `completed` immediately after it succeeds.
- Mark task 2 `completed` only after ALL agents return.
- If a step fails, leave the task `in_progress` and report the error.
- Mark tasks that no longer apply as `deleted`.

1.1 Discover project architecture:
```shell
PROJECT_ROOT="$(git rev-parse --show-toplevel)"
ARCH=$(~/.claude/skills/spec-review/scripts/discover-project-architecture.sh "$PROJECT_ROOT" --json)
```
This returns: packages with layer classification (frontend/backend/mcp/tooling), frameworks per package, test frameworks, API test tools, Bruno folders (if any), E2E framework, data flow patterns, i18n approach, and security patterns.
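As a sketch of consuming that output, the layer classifications can be pulled out with standard text tools. The JSON field names below are assumptions about the script's schema, not documented behavior:

```shell
# Simulated discovery output; the real script's JSON schema may differ.
ARCH='{"packages":[{"name":"web","layer":"frontend"},{"name":"api","layer":"backend"}]}'

# Extract the distinct layer values to decide which analysis agents apply
layers=$(printf '%s\n' "$ARCH" | grep -o '"layer":"[a-z]*"' | sort -u)
echo "$layers"
```

In a real run you would feed `$ARCH` from the discovery script and prefer a JSON-aware tool such as `jq` if it is available in the workspace.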
1.2 Extract spec sections:
```shell
SPEC_DATA=$(~/.claude/skills/spec-review/scripts/extract-spec-sections.sh "$SPEC_FILE" --json)
```
This returns: title, acceptance criteria counts, referenced files and endpoints, sub-task count, and gap detection (missing codebase state, missing test plan).
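The gap-detection flags can gate which enrichment sections need to be drafted. A minimal sketch, assuming the extractor emits boolean gap fields with these (hypothetical) names:

```shell
# Simulated extractor output; field names are assumptions, not the
# script's documented schema.
SPEC_DATA='{"title":"Story 1.2","missing_test_plan":true,"missing_codebase_state":false}'

# Flag a gap so the review knows a test plan section must be generated
if printf '%s' "$SPEC_DATA" | grep -q '"missing_test_plan":true'; then
  gap="GAP: spec has no test plan"
  echo "$gap"
fi
```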
1.3 Read CLAUDE.md and architecture docs from the project root. Look for:
- CLAUDE.md — project-level development rules and conventions
- docs/development/ARCHITECTURE.md or similar — architecture documentation
- docs/development/TESTING.md or similar — test strategy documentation

These provide project-specific context that the discovery script can't capture (business rules, data flow conventions, layer responsibility definitions).
Launch ALL four agents in a SINGLE Task tool message.
| Agent | Type | Purpose | Key Output |
|---|---|---|---|
| Codebase Verifier | feature-dev:code-explorer | Verify every file path, function name, and data shape | Verified/corrected paths, signatures, shapes |
| Architecture Reviewer | feature-dev:code-explorer | Map spec to discovered architecture, identify boundary violations | Architecture alignment report |
| Design Simplifier | feature-dev:code-architect | Analyze for over-engineering, suggest simpler approaches | Simplification recommendations |
| Test Plan Extractor | general-purpose | Extract testable scenarios, design API/E2E test plan | Test plan using project's discovered test tools |
Codebase Verifier — Provide the spec's referenced files/functions/fields and ask:
```
You are verifying a story specification against the actual codebase.

SPEC TITLE: [title]
SPEC FILE: [path]

FILES REFERENCED IN SPEC:
[list from extraction]

FUNCTIONS REFERENCED:
[list of function names and claimed signatures]

YOUR TASK:
1. Verify each referenced file EXISTS at the claimed path
2. Verify each function has the claimed SIGNATURE
3. Verify each data shape matches reality (field names, types, nesting)
4. Check line number references are still accurate
5. Report DISCREPANCIES between spec claims and actual codebase

OUTPUT FORMAT:
- VERIFIED: [path/function] — matches spec
- CORRECTED: [path/function] — spec says X, actual is Y
- MISSING: [path/function] — does not exist in codebase
```
Architecture Reviewer — Inject discovered architecture into the prompt:
```
You are reviewing a story specification for architecture alignment.

SPEC TITLE: [title]
SOLUTION OVERVIEW: [paste solution section]
SUB-TASKS: [paste sub-task list]

PROJECT ARCHITECTURE (discovered at runtime):
[Paste the ARCH JSON from Phase 1 — layers, services, frameworks]

PROJECT RULES (from CLAUDE.md):
[Paste relevant architecture rules from CLAUDE.md]

YOUR TASK:
1. Identify the project's layer boundaries from the discovered architecture
2. Check each sub-task respects layer boundaries
3. Map spec requirements to EXISTING services that already handle similar work
4. Flag missing cross-cutting concerns (CSRF, caching, i18n — based on discovered patterns)
5. Check if data flow follows the project's established direction

OUTPUT FORMAT:
- ALIGNED: [sub-task] — correctly uses [layer/service]
- VIOLATION: [sub-task] — [description of violation and fix]
- REUSE: [sub-task] — existing [service/function] already handles this
- MISSING: [concern] — spec should address [security/cache/i18n/etc.]
```
Design Simplifier — Provide the spec's technical design:
```
You are reviewing a story specification for over-engineering.

SPEC TITLE: [title]
TECHNICAL DESIGN: [paste design section]
SUB-TASKS: [paste all sub-tasks with details]
SUB-TASK COUNT: [N]
NEW FILES PROPOSED: [list]

SIMPLIFICATION PATTERNS:
[Include the 15 patterns from references/design-simplification-checklist.md]

YOUR TASK:
1. For each sub-task, ask: "Can this be done more simply?"
2. Check for unnecessary abstractions (wrappers, managers, helpers used once)
3. Check for redundant infrastructure (new cache when upstream caches exist)
4. Check for over-scoped changes (refactoring, observability, feature flags)
5. Propose concrete simplifications with rationale

OUTPUT FORMAT:
- KEEP: [sub-task] — appropriately scoped
- SIMPLIFY: [sub-task] — [current approach] → [simpler approach] because [reason]
- REMOVE: [sub-task] — unnecessary because [reason]
- MERGE: [sub-tasks X+Y] → single sub-task because [reason]
```
Test Plan Extractor — Use discovered test tools:
```
You are extracting testable use cases from a story specification.

SPEC TITLE: [title]
ACCEPTANCE CRITERIA: [paste all ACs]
ENDPOINTS AFFECTED: [list of API routes/tools]

PROJECT TEST INFRASTRUCTURE (discovered):
- API Test Tool: [e.g., Bruno, Postman, HTTP Client, or none]
- API Test Folders: [list from discovery, or "none detected"]
- Unit Test Framework: [e.g., Jest, Vitest, pytest per package]
- E2E Framework: [e.g., Playwright, Cypress, or none]

SCENARIO CATEGORIES:
HP (Happy Path, P0) | VAL (Validation, P0) | EDGE (Edge Cases, P1)
DC (Data Contract, P0) | REG (Regression, P1) | LOC (Localization, P1) | PERF (Performance, P2)

YOUR TASK:
1. For each acceptance criterion, identify testable API scenarios
2. Classify each scenario (HP/VAL/EDGE/DC/REG/LOC/PERF)
3. Determine the correct test location based on discovered folder structure
4. Identify shared fixtures/sessions that can be reused
5. Define key assertions for each test
6. Check for existing tests that might already cover scenarios
7. Note scenarios better suited for unit tests or E2E

OUTPUT FORMAT:
## API Test Plan
### Test Summary
| ID | Scenario | Category | File | Folder | Priority |
### Coverage Matrix
| Acceptance Criterion | Test(s) | Gap? |
### Test Specifications
1. **TEST.1** Create [folder/file] — [description with key assertions]
```
When CLAUDE_CODE_EXPERIMENTAL_AGENT_TEAMS is enabled, Phase 2 can use a team instead of 4 parallel Agent calls.

Team workflow:
- `TeamCreate("spec-review-{spec-name}")` to create the team
- `TeamDelete` for cleanup once the review completes

Default: Sub-agent mode (no teams). Teams are opt-in when enabled.
After all agents return, synthesize findings into spec enrichments.
## Current Codebase State
**What EXISTS (verified):**
- `exact/path/to/file.ext` — functionName(params): ReturnType (line XX)
**What needs to be MODIFIED:**
- `exact/path/to/file.ext` — [specific change description]
**What needs to be CREATED:**
- `exact/path/to/new-file.ext` — [purpose and responsibility]
## Design Simplification Notes
1. **[Sub-task X]**: [Current approach] → [Simpler approach] — [rationale]
Each sub-task must have: File, Function, Change, Verification command, Dependencies, Estimated complexity (S/M/L).
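An enriched sub-task carrying all six attributes might look like this (every path, function name, and dependency below is illustrative, not from any real project):

```
- [ ] Sub-task 3: Handle unit field in duration formatting
  - File: src/services/format-duration.ts (hypothetical)
  - Function: formatDuration
  - Change: read duration.unit instead of assuming seconds
  - Verification: grep -n "duration.unit" src/services/format-duration.ts
  - Dependencies: Sub-task 1
  - Complexity: S
```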
Formatted using the project's discovered test tool conventions (Bruno .bru files,
Postman collections, plain HTTP files, or unit test specifications).
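If discovery finds Bruno, a test specification might translate into a .bru file roughly like the sketch below. The block syntax is from memory and the endpoint is invented; mirror the project's existing .bru files rather than this sketch:

```
meta {
  name: TEST.1 Create item happy path
  type: http
  seq: 1
}

post {
  url: {{baseUrl}}/api/items
  body: json
}

body:json {
  { "name": "example" }
}

assert {
  res.status: eq 201
}
```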
| Dimension | Score (1-5) | Notes |
|---|---|---|
| Codebase Accuracy | X | How many refs were correct vs corrected |
| Architecture Alignment | X | How many violations found |
| Design Simplicity | X | Complexity score from simplifier |
| Test Coverage | X | % of ACs mapped to tests |
| Sub-Task Completeness | X | Are all sub-tasks implementation-ready? |
| Overall Readiness | X/25 | Sum of the five dimension scores |
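The Overall Readiness cell is just the sum of the five dimension scores. As a minimal sketch (the individual scores here are made up):

```shell
# One 1-5 score per dimension from the table above (example values)
accuracy=4; alignment=5; simplicity=3; coverage=4; completeness=4

# Overall readiness is the unweighted sum, out of 25
overall=$((accuracy + alignment + simplicity + coverage + completeness))
echo "Overall Readiness: $overall/25"
```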
Readiness levels:
Present the findings as a structured report, then ask the user whether to apply the enrichments to the spec directly or save the review as a standalone report (specs/reviews/review-X.Y.md).

Always verify with grep/read — never trust assumed file paths.

Related skills:
- spec-creator — generates story specs (this skill reviews them)
- context-shield — use when a spec references many external docs that need reading