# Phase 4: Design Reflection - Detailed Guidance

From the superpowers plugin: reviews design documents with parallel sub-agents to check requirements coverage, BDD completeness, consistency, and risks. Outputs traceability matrices, issue lists, and summaries scaled by project complexity.
## Goal

Use parallel sub-agents to systematically review design documents and identify gaps before committing.
## Why Reflection Matters

Design documents can have issues that impact implementation:

- Requirements from Phase 1 that got lost in synthesis
- Missing BDD scenarios for edge cases or error conditions
- Inconsistencies between documents
- Undocumented assumptions and risks
Reflection catches these issues before implementation begins.
## Why Use Sub-Agents

Sub-agents provide:

- **Fresh perspective** - No bias from having written the documents
## Scaling by Complexity

Scale reflection based on the complexity assessment from Phase 1 Discovery.
**Simple:** No sub-agents. The main agent performs a single review pass, checking requirements coverage, BDD completeness, and document consistency sequentially.
**Medium:** Launch two sub-agents in parallel using the Agent tool with `subagent_type=general-purpose`:
**Sub-agent 1: Requirements & BDD Review.** Combine the requirements traceability and BDD completeness checks into one agent: verify that every Phase 1 requirement is addressed and that BDD scenarios cover the happy path, edge cases, and error conditions.
**Sub-agent 2: Consistency & Risk Review.** Combine the cross-document consistency and risk checks into one agent: verify that terminology, cross-references, and component names are consistent, and identify key unaddressed risks.
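Outside Claude Code the Agent tool is not available, but the fan-out/fan-in shape of this step can be sketched in plain Python with `concurrent.futures`; the two review functions below are hypothetical stand-ins, not real agent calls:

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical stand-ins for the two combined review sub-agents.
def requirements_and_bdd_review(docs):
    return f"requirements/BDD report covering {len(docs)} documents"

def consistency_and_risk_review(docs):
    return f"consistency/risk report covering {len(docs)} documents"

docs = ["_index.md", "bdd-specs.md", "best-practices.md"]

# Fan out: submit both reviews at once; fan in: wait for both reports.
with ThreadPoolExecutor(max_workers=2) as pool:
    futures = [pool.submit(review, docs)
               for review in (requirements_and_bdd_review,
                              consistency_and_risk_review)]
    reports = [future.result() for future in futures]
```

The point of the pattern is that neither review blocks the other, and the main agent only proceeds once both reports are in hand.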
**Complex:** Launch these three sub-agents in parallel using the Agent tool with `subagent_type=general-purpose`:
**Sub-agent 1: Requirements Traceability Review**
You are reviewing design documents for requirements coverage.
Context: [Provide Phase 1 requirements summary]
Your task:
1. Read all design documents in docs/plans/YYYY-MM-DD-<topic>-design/
2. Create a traceability matrix mapping each Phase 1 requirement to where it's addressed
3. Identify any orphaned requirements (not addressed anywhere)
4. Identify any implementation details without corresponding requirements
Output format:
- Requirements Traceability Matrix (requirement → document → section)
- Orphaned Requirements List (requirements not addressed)
- Scope Creep List (implementation without requirements)
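The three output artifacts reduce to set operations over requirement IDs. A minimal sketch, with made-up IDs, documents, and sections for illustration:

```python
# Phase 1 requirements, keyed by a hypothetical requirement ID.
requirements = {"R1": "User login", "R2": "Audit logging", "R3": "Rate limiting"}

# Where each requirement is addressed, gathered while reading the design docs.
coverage = {
    "R1": ("auth-design.md", "Login Flow"),
    "R3": ("api-design.md", "Throttling"),
}

# Requirement IDs referenced by implementation details in the design docs.
referenced = {"R1", "R3", "X9"}

# Traceability matrix: requirement -> document -> section.
matrix = [(rid, doc, section) for rid, (doc, section) in coverage.items()]

# Orphaned requirements: defined in Phase 1 but addressed nowhere.
orphaned = [rid for rid in requirements if rid not in coverage]

# Scope creep: implementation details with no corresponding requirement.
scope_creep = sorted(referenced - requirements.keys())
```

Here `R2` comes out orphaned and `X9` comes out as scope creep, which is exactly the shape of finding the sub-agent is asked to report.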
**Sub-agent 2: BDD Completeness Review**
You are reviewing BDD specifications for completeness.
Context: [Provide feature summary]
Your task:
1. Read bdd-specs.md in docs/plans/YYYY-MM-DD-<topic>-design/
2. Categorize all scenarios as: happy path, error path, or edge case
3. Identify missing scenarios for each category
4. Check that each scenario has complete Given-When-Then structure
Output format:
- Scenario Coverage Summary (count by category)
- Missing Happy Path Scenarios
- Missing Error Path Scenarios (validation, auth, external failures, timeouts)
- Missing Edge Case Scenarios (boundaries, empty states, concurrency)
- Incomplete Scenarios (missing Gherkin structure)
**Sub-agent 3: Cross-Document Consistency Review**
You are reviewing design documents for consistency.
Your task:
1. Read all design documents in docs/plans/YYYY-MM-DD-<topic>-design/
2. Build a terminology glossary from all documents
3. Identify terminology inconsistencies (same concept, different terms)
4. Verify cross-references between documents work
5. Check component/file names are consistent across documents
Output format:
- Terminology Glossary (term → definition → source document)
- Inconsistencies Found (term variations, conflicting definitions)
- Broken Cross-References (links that don't resolve)
- Naming Inconsistencies (component/file name variations)
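Step 4, verifying cross-references, is the most mechanical check and can be scripted. A sketch that flags relative markdown links pointing at files that do not exist (in-page anchors are not checked):

```python
import re
from pathlib import Path

def broken_cross_references(design_dir):
    """Return (document, link target) pairs for relative links that do not resolve."""
    link = re.compile(r"\[[^\]]*\]\(([^)#\s]+)")  # path portion of [text](path#anchor)
    broken = []
    for doc in sorted(Path(design_dir).glob("*.md")):
        for target in link.findall(doc.read_text()):
            if target.startswith(("http://", "https://")):
                continue  # only check local cross-references
            if not (doc.parent / target).exists():
                broken.append((doc.name, target))
    return broken
```

Anything this returns goes straight into the "Broken Cross-References" section; terminology and naming checks still need the agent's judgment.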
**Security Review Sub-agent** (for features with security implications):
You are reviewing design documents for security considerations.
Context: [Provide feature summary and threat context]
Your task:
1. Read all design documents in docs/plans/YYYY-MM-DD-<topic>-design/
2. Identify security-relevant components and data flows
3. Check for documented threat model and mitigations
4. Identify potential vulnerabilities not addressed
Output format:
- Security-Relevant Components
- Threat Model Coverage (what's addressed vs missing)
- Unaddressed Security Concerns
- Recommendations
**Risk Assessment Sub-agent** (for complex or high-stakes features):
You are reviewing design documents for risks and assumptions.
Context: [Provide feature summary and constraints]
Your task:
1. Read all design documents in docs/plans/YYYY-MM-DD-<topic>-design/
2. List all explicit assumptions found
3. Identify implicit assumptions (things assumed but not stated)
4. Identify technical, integration, and implementation risks
5. Check if risks have documented mitigations
Output format:
- Explicit Assumptions List
- Implicit Assumptions List (things taken for granted)
- Technical Risks (complexity, performance, scalability)
- Integration Risks (dependencies, APIs, data migration)
- Implementation Risks (new code, changes to existing code, infrastructure)
- Risks Without Mitigations
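The last output item, risks without documented mitigations, reduces to filtering the risk register. A sketch with made-up entries:

```python
# Hypothetical risk register: risk -> documented mitigation (None if absent).
risks = {
    "Third-party API rate limits": "Cache responses and back off exponentially",
    "Schema migration on live data": None,
    "New queue infrastructure": "Staged rollout behind a feature flag",
}

# Any risk carrying no mitigation is escalated in the report.
risks_without_mitigations = [risk for risk, mitigation in risks.items()
                             if mitigation is None]
```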
## Consolidate Findings

Use the TaskOutput tool to retrieve the results from all launched sub-agents, then merge the findings into a unified gap list:
| Category | Finding | Priority | Document to Update |
|---|---|---|---|
| Orphaned requirement | "X not addressed" | High | _index.md |
| Missing scenario | "Error case Y" | High | bdd-specs.md |
| Inconsistency | "Term Z varies" | Medium | All |
| Unaddressed risk | "Dependency W" | Medium | best-practices.md |
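Merging can be sketched as sorting tagged findings by priority tier; the entries below mirror the example rows in the table, and the `Finding` type is an illustrative structure, not part of any tool:

```python
from dataclasses import dataclass

RANK = {"High": 0, "Medium": 1, "Low": 2}

@dataclass
class Finding:
    category: str
    detail: str
    priority: str
    document: str

findings = [
    Finding("Inconsistency", 'Term "Z" varies', "Medium", "all documents"),
    Finding("Orphaned requirement", '"X" not addressed', "High", "_index.md"),
    Finding("Missing scenario", 'Error case "Y"', "High", "bdd-specs.md"),
]

# Unified gap list: highest-priority findings first, stable within a tier.
gap_list = sorted(findings, key=lambda f: RANK[f.priority])
```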
Triage each finding into one of three tiers:

- **High Priority** (must fix before commit)
- **Medium Priority** (should fix)
- **Low Priority** (nice to have)
Based on the prioritized gap list, update each affected document to close its gaps.
For significant updates, consider launching a quick verification sub-agent:
You are verifying that specific gaps have been addressed.
Gaps that were identified:
[List the specific gaps that were fixed]
Your task:
1. Read the updated sections in docs/plans/YYYY-MM-DD-<topic>-design/
2. Verify each gap is now addressed
3. Report any gaps that remain
Output format:
- Verification Results (gap → addressed: yes/no)
- Remaining Issues
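The verification loop itself can be sketched as a gap-by-gap predicate check; the gap strings and the keyword matcher here are illustrative stand-ins for the sub-agent's judgment:

```python
def verify_gaps(gaps, is_addressed):
    """Map each gap to whether it is now addressed; list whatever remains."""
    results = {gap: is_addressed(gap) for gap in gaps}
    remaining = [gap for gap, addressed in results.items() if not addressed]
    return results, remaining

# Illustrative check: a gap counts as addressed if its keyword now
# appears in the updated document text.
updated_text = "Added timeout handling scenario; renamed Session to UserSession."
gaps = ["timeout handling", "concurrent edits"]
results, remaining = verify_gaps(gaps, lambda gap: gap in updated_text)
```

Anything left in `remaining` feeds another round of updates before the design is committed.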