Reviews spec-forge generated documents (tech-design + feature specs) for quality, completeness, and internal consistency. Finds issues such as incomplete sections, contradictions, missing traceability, and vague specs, then optionally auto-fixes them. Supports iterative review-fix cycles with a maximum of 2 iterations.
Install: npx claudepluginhub tercel/tercel-claude-plugins --plugin spec-forge

This skill uses the workspace's default tool permissions.
Systematically review spec-forge generated documents for quality, completeness, and internal consistency. Optionally auto-fix issues found.
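For example, with a hypothetical feature name:

/spec-forge:review checkout-flow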
If unsure about intended content, add a <!-- REVIEW: {question} --> comment instead of guessing.

Findings are graded at three severities:

| Severity | Meaning | Example |
|---|---|---|
| Critical | Wrong, contradictory, or misleading content | Feature spec API signature contradicts tech-design, component boundary mismatch |
| Major | Significant gap that would block or confuse implementation | Empty required section, missing error handling spec, undefined edge cases |
| Minor | Quality issue that degrades usefulness but isn't blocking | Vague description, missing cross-reference, inconsistent terminology |
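Where the levels need to be machine-comparable (e.g. to sort findings most-severe-first in the summaries below), a minimal sketch; the enum and its ordering are an assumption, not part of spec-forge:

```python
from enum import IntEnum

class Severity(IntEnum):
    """Higher value = more severe, so max()/sorted() pick Critical first."""
    MINOR = 1
    MAJOR = 2
    CRITICAL = 3

# e.g. sorted(findings, key=lambda f: f.severity, reverse=True)
```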
Step 1: Parse the arguments to determine what to review.

If a feature_name argument is provided, look for:

- docs/{feature_name}/tech-design.md
- docs/features/ (glob for docs/features/*.md)
- ideas/{feature_name}/draft.md (if exists)
- docs/project-{feature_name}.md (if exists, for multi-split context)

If no feature_name is provided, scan docs/ for the most recent tech-design and all feature specs in docs/features/.

Use AskUserQuestion to ask for the user's auto-fix preference (fix issues automatically, or report only); Step 5 acts on this answer.
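A minimal sketch of this lookup (and of the target/context split in Step 2) in Python, assuming the directory layout above; resolve_scope and its return shape are illustrative, not part of the skill:

```python
from pathlib import Path

def resolve_scope(feature_name: str | None) -> dict:
    """Resolve review targets and reference context for one feature."""
    if feature_name is None:
        # No argument: fall back to the most recently modified tech-design.
        designs = sorted(Path("docs").glob("*/tech-design.md"),
                         key=lambda p: p.stat().st_mtime, reverse=True)
        feature_name = designs[0].parent.name if designs else ""
    targets = [Path(f"docs/{feature_name}/tech-design.md"),
               *sorted(Path("docs/features").glob("*.md"))]
    context = [Path(f"ideas/{feature_name}/draft.md"),    # upstream requirements
               Path(f"docs/project-{feature_name}.md")]    # multi-split manifest
    return {"targets": [p for p in targets if p.exists()],
            "context": [p for p in context if p.exists()]}
```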
Step 2: Build the list of documents to review:
Review targets (will be reviewed):

- docs/{feature_name}/tech-design.md
- docs/features/overview.md
- docs/features/{component-1}.md, docs/features/{component-2}.md, etc.

Reference context (read for context, NOT reviewed):

- ideas/{feature_name}/draft.md — upstream requirements
- docs/project-{feature_name}.md — project manifest with sub-feature scope

Read each review target document fully.
Display the inventory:
Review scope: {feature_name}
Review targets: {N} documents
- docs/{feature_name}/tech-design.md
- docs/features/overview.md
- docs/features/{component-1}.md
- docs/features/{component-2}.md
...
Reference context: {N} documents
- ideas/{feature_name}/draft.md
Step 3: Check each review target document against the following checklist:
- Do any sections still contain unfilled {placeholder} template variables?
- For each feature spec (docs/features/{name}.md), verify there is a corresponding row in the tech-design's §8.1 Component Overview with an identical slug. Raise a Critical finding if a feature spec exists with no matching component or vice versa — this breaks the traceability chain and confuses downstream consumers like code-forge.
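A sketch of that slug cross-check, assuming §8.1 is a markdown table whose first column holds the component slug; the section-parsing details are assumptions about the document format, not spec-forge's guaranteed layout:

```python
import re
from pathlib import Path

def traceability_gaps(tech_design_path: str) -> tuple[set[str], set[str]]:
    """Compare feature-spec filenames against §8.1 component slugs."""
    text = Path(tech_design_path).read_text()
    # Assumed layout: §8.1 rows look like "| slug | ... |" below the header.
    section = text.split("8.1")[1] if "8.1" in text else ""
    slugs = set(re.findall(r"^\|\s*([a-z][a-z0-9-]*)\s*\|", section, re.MULTILINE))
    specs = {p.stem for p in Path("docs/features").glob("*.md")} - {"overview"}
    # Specs with no matching component, and components with no spec:
    return specs - slugs, slugs - specs
```

Either returned set being non-empty maps to a Critical finding under the rule above.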
For each issue found, produce a structured finding:

- [{severity}] {file_path} § {section}: {description of issue} → FIX: {concrete fix instruction}
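For illustration, a hypothetical finding (file, section, and issue all invented) might read:

- [Major] docs/features/checkout.md § 5 Error Handling: section is empty → FIX: specify retry and failure behavior for each external call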
Step 4: Compile the full review result:
REVIEW_RESULT: {PASS | ISSUES_FOUND}
CRITICAL_COUNT: {N}
MAJOR_COUNT: {N}
MINOR_COUNT: {N}
FINDINGS:
- [{severity}] {file_path} § {section}: {description} → FIX: {fix instruction}
...
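A minimal sketch of this compilation step, assuming each finding is carried as a (severity, formatted-line) pair from Step 3; the function name and pair shape are illustrative:

```python
from collections import Counter

def compile_result(findings: list[tuple[str, str]]) -> str:
    """Render the review-result block from (severity, finding-line) pairs."""
    counts = Counter(severity for severity, _ in findings)
    lines = [f"REVIEW_RESULT: {'PASS' if not findings else 'ISSUES_FOUND'}",
             f"CRITICAL_COUNT: {counts['Critical']}",
             f"MAJOR_COUNT: {counts['Major']}",
             f"MINOR_COUNT: {counts['Minor']}",
             "FINDINGS:"]
    lines += [line for _, line in findings]
    return "\n".join(lines)
```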
Display summary:
spec-forge review: {feature_name}
Documents reviewed: {N}
Result: {PASS | ISSUES_FOUND}
Findings: {critical} critical, {major} major, {minor} minor
{If ISSUES_FOUND, list top findings}
If REVIEW_RESULT: PASS, inform the user and stop.
If REVIEW_RESULT: ISSUES_FOUND, proceed based on user's auto-fix preference from Step 1:
If auto-fix was not pre-selected, ask now via AskUserQuestion whether to auto-fix the issues or stop and report them.
Maximum iterations: 2 (one fix + one re-review). If issues persist after 2 iterations, report remaining issues and stop.
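The cycle and its cap, sketched with review and apply_fixes passed in as stand-ins for Steps 3-4 and Step 5 (both hypothetical callables, not real APIs):

```python
from typing import Callable

MAX_ITERATIONS = 2  # one fix pass + one re-review, per the rule above

def review_fix_cycle(documents: list[str],
                     review: Callable[[list[str]], list[str]],
                     apply_fixes: Callable[[list[str]], None]) -> list[str]:
    """Return any findings still open after the iteration cap."""
    findings = review(documents)             # initial review (Steps 3-4)
    for _ in range(MAX_ITERATIONS - 1):
        if not findings:
            break                            # PASS: nothing left to fix
        apply_fixes(findings)                # Step 5
        findings = review(documents)         # re-review the same documents
    return findings
```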
Step 5: For each finding to fix:
- Apply the FIX instruction from the finding.
- If the correct fix is unclear or requires a domain decision, insert a <!-- REVIEW: {question} --> comment instead of guessing.

After all fixes are applied, re-run the review (Steps 3-4) on the same documents.
If REVIEW_RESULT: PASS, display success:
spec-forge review: PASS after fixes — {N} issues resolved
If REVIEW_RESULT: ISSUES_FOUND on iteration 2, display the remaining issues and stop:
spec-forge review: {N} issues remain after auto-fix
Remaining issues:
- [{severity}] {file} § {section}: {description}
...
These may require manual attention or domain-specific decisions.
Display final status:
spec-forge review complete: {feature_name}
Documents reviewed: {N}
Issues found: {total}
Issues fixed: {fixed}
Issues remaining: {remaining}
Next steps:
/code-forge:plan @docs/features/{component-name}.md → Generate implementation plan
/spec-forge:review {feature_name} → Re-run review after manual fixes