From srnnkls-tropos
Multi-agent spec review with parallel Claude/OpenCode reviewers. Use after spec-create or standalone via /spec.review.
npx claudepluginhub joshuarweaver/cascade-code-general-misc-2 --plugin srnnkls-tropos
This skill uses the workspace's default tool permissions.
Multi-perspective spec review using parallel subagent dispatch for comprehensive validation.
Creates isolated Git worktrees for feature branches with prioritized directory selection, gitignore safety checks, auto project setup for Node/Python/Rust/Go, and baseline verification.
Executes implementation plans in current session by dispatching fresh subagents per independent task, with two-stage reviews: spec compliance then code quality.
Dispatches parallel agents to independently tackle 2+ tasks like separate test failures or subsystems without shared state or dependencies.
Multi-perspective spec review using parallel subagent dispatch for comprehensive validation.
Reference: See reference/roles/ for reviewer personas, reference/report.md for YAML schemas, reference/playbook.md for edge case handling.
Use after spec-create to validate a spec before implementation, or before task-dispatch for Initiatives (catches gate issues early). Invoke standalone with /spec.review [spec-name] (e.g., /spec.review auth-system). Specs are read from ./specs/draft/ or ./specs/active/ and consist of spec.md, context.md, tasks.yaml, and validation.yaml.

Question 1: Select reviewers:
Header: Reviewers
Question: Which reviewers should analyze this spec?
multiSelect: true
Options:
- claude-opus: Claude Opus - native subagent, comprehensive, context-aware
- claude-sonnet: Claude Sonnet - faster native review
- openai-gpt5.2: OpenAI GPT-5.2 - base model
- openai-gpt5.2-codex: OpenAI GPT-5.2 Codex - code-specialized
- openai-gpt5.2-pro: OpenAI GPT-5.2 Pro - extended capabilities
- gemini-3-flash: Google Gemini 3 Flash - fast, efficient
- gemini-3-pro: Google Gemini 3 Pro - advanced reasoning
Default selection: claude-opus, openai-gpt5.2-pro, gemini-3-pro
Question 2: Select reasoning effort (if OpenCode reviewers selected):
Header: Reasoning
Question: What reasoning effort level for OpenCode reviewers?
multiSelect: false
Options:
- low: Quick responses, minimal deliberation
- medium: Balanced reasoning (Recommended)
- high: Deep analysis, thorough deliberation
- xhigh: Maximum reasoning (GPT-5.2 only)
Default: medium
Model mapping to commands:
- claude-opus → Task tool with model: "opus"
- claude-sonnet → Task tool with model: "sonnet"
- openai-gpt5.2 → opencode run --model "openai/gpt-5.2" --variant {reasoning}-medium
- openai-gpt5.2-codex → opencode run --model "openai/gpt-5.2-codex" --variant {reasoning}-medium
- openai-gpt5.2-pro → opencode run --model "openai/gpt-5.2" --variant {reasoning}-medium
- gemini-3-flash → opencode run --model "google/gemini-3-flash-preview" --variant {reasoning}-medium
- gemini-3-pro → opencode run --model "google/gemini-3-pro-preview" --variant {reasoning}-medium

CRITICAL: Dispatch all selected reviewers in the same message for true parallelism.
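A minimal sketch of that mapping (illustrative only; `build_dispatches` is a hypothetical helper, and the returned descriptors stand in for the actual Task and Bash tool calls that would be issued together in one message):

```python
# Illustrative sketch only: mirrors the mapping above. Each descriptor would
# become one Task or Bash tool call, all dispatched in the same message.
CLAUDE_MODELS = {"claude-opus": "opus", "claude-sonnet": "sonnet"}
OPENCODE_MODELS = {
    "openai-gpt5.2": "openai/gpt-5.2",
    "openai-gpt5.2-codex": "openai/gpt-5.2-codex",
    "openai-gpt5.2-pro": "openai/gpt-5.2",
    "gemini-3-flash": "google/gemini-3-flash-preview",
    "gemini-3-pro": "google/gemini-3-pro-preview",
}


def build_dispatches(selected, reasoning, review_prompt):
    """Return one dispatch descriptor per selected reviewer."""
    dispatches = []
    for reviewer in selected:
        if reviewer in CLAUDE_MODELS:
            dispatches.append({
                "tool": "Task",
                "subagent_type": "general-purpose",
                "model": CLAUDE_MODELS[reviewer],
                "prompt": review_prompt,
            })
        else:
            dispatches.append({
                "tool": "Bash",
                "command": (
                    f'timeout 1200 opencode run --model "{OPENCODE_MODELS[reviewer]}" '
                    f'--variant {reasoning}-medium "{review_prompt}"'
                ),
            })
    return dispatches
```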
Review Prompt Template:
You are reviewing a spec for completeness and feasibility.
## Spec Documents
[Include spec.md, context.md, tasks.yaml content]
## Review Focus
Evaluate against these gates:
1. **Completeness** - Are all requirements specified? Missing behaviors?
2. **Consistency** - Do documents contradict each other? Ambiguous terms?
3. **Feasibility** - Can tasks be implemented as described? Missing dependencies?
4. **Clarity** - Would a fresh developer understand what to build?
## Output Format
Return a YAML report:
```yaml
reviewer_report:
  reviewer: {REVIEWER_ID}
  gates:
    completeness:
      status: pass | fail
      issues: []
    consistency:
      status: pass | fail
      issues: []
    feasibility:
      status: pass | fail
      issues: []
    clarity:
      status: pass | fail
      issues: []
  issues:
    - severity: critical | high | medium
      gate: completeness
      area: ${TAXONOMY_AREA}
      description: "Clear description"
      suggestion: "How to fix"
  clarifying_questions:
    - area: ${TAXONOMY_AREA}
      question: "What needs clarification?"
  strengths:
    - "Positive observation"
```
**Dispatch by Type:**
**Claude reviewers (Task tool):**
```python
Task(
    subagent_type="general-purpose",
    model="opus",  # or "sonnet"
    prompt=review_prompt
)
```

**OpenCode reviewers (Bash tool, background):**
timeout 1200 opencode run --model "{MODEL_PATH}" --variant {reasoning}-medium "{review_prompt}"
Examples (with high reasoning):
opencode run --model "openai/gpt-5.2" --variant high-medium "{prompt}"
opencode run --model "google/gemini-3-pro-preview" --variant high-medium "{prompt}"
opencode run --model "openai/gpt-5.2-codex" --variant high-medium "{prompt}"

After all reviewers complete:
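A minimal aggregation sketch (assumes each reviewer's raw YAML output has been collected; a gate is rolled up as FAIL when any reviewer fails it, consistent with the example table below; merging duplicate findings across reviewers is left out):

```python
# Illustrative sketch only: parse reviewer_report YAML and roll up gate status.
import yaml

GATES = ["completeness", "consistency", "feasibility", "clarity"]


def aggregate(report_texts):
    """report_texts maps reviewer id -> raw YAML text returned by that reviewer."""
    reports = {
        rid: yaml.safe_load(text)["reviewer_report"]
        for rid, text in report_texts.items()
    }

    # A gate fails overall if any reviewer marked it as fail.
    gate_summary = {
        gate: "FAIL"
        if any(r["gates"][gate]["status"] == "fail" for r in reports.values())
        else "PASS"
        for gate in GATES
    }

    # Group issues by severity, noting which reviewer reported each one.
    issues = {"critical": [], "high": [], "medium": []}
    for rid, report in reports.items():
        for issue in report.get("issues", []):
            issues[issue["severity"]].append({**issue, "found_by": rid})

    return gate_summary, issues
```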
Gate Summary Table:
| Gate | Status | Claude | GPT-5.2 Pro | Gemini-3 Pro |
|--------------|--------|--------|-------------|--------------|
| Completeness | FAIL | fail | pass | fail |
| Consistency | PASS | pass | pass | pass |
| Feasibility | FAIL | fail | fail | pass |
| Clarity | PASS | pass | pass | pass |
Issues by Severity:
## Critical (must fix before implementation)
- [C1] Missing error handling for auth timeout (Completeness)
Found by: claude-opus, opencode-gemini3-pro
Suggestion: Add error case to spec.md#edge-cases
## High (should fix)
- [H1] Task T003 depends on undefined API contract (Feasibility)
Found by: claude-opus, opencode-gpt5.2-pro
Suggestion: Define API in context.md or defer task
## Medium (consider)
- [M1] Term "session" used inconsistently (Consistency)
Found by: opencode-gpt5.2-pro
Suggestion: Add to terminology section
Use AskUserQuestion with questions grouped by taxonomy area:
Header: Scope
Question: The spec mentions "user roles" but doesn't define them. What roles exist?
multiSelect: false
Options:
- Admin/User: Two-tier system
- Role-based: Granular permissions
- Defer: Address later
Record answers for validation.yaml update.
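A sketch of the question payload and the recorded answer (the exact AskUserQuestion signature is not reproduced here; the dict simply mirrors the fields above, and the chosen answer is hypothetical):

```python
# Illustrative sketch only: one clarifying question grouped under the Scope area.
question = {
    "header": "Scope",
    "question": "The spec mentions \"user roles\" but doesn't define them. What roles exist?",
    "multiSelect": False,
    "options": [
        {"label": "Admin/User", "description": "Two-tier system"},
        {"label": "Role-based", "description": "Granular permissions"},
        {"label": "Defer", "description": "Address later"},
    ],
}

# Record the chosen answer so it can be written into validation.yaml.
answers = [
    {"id": "Q001", "area": "scope", "question": question["question"], "answer": "Admin/User"},
]
```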
Add clarification session to validation.yaml:
clarification_sessions:
  - id: S00${N}
    timestamp: ${ISO_TIMESTAMP}
    source: spec-review
    reviewers: [claude-opus, opencode-gpt5.2]
    questions:
      - id: Q001
        question: "${QUESTION}"
        answer: "${ANSWER}"
        area: ${TAXONOMY_AREA}
    doc_updates:
      - file: spec.md
        section: ${SECTION}
        action: modified
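A minimal sketch of that update (assumes validation.yaml uses the clarification_sessions layout above and that PyYAML is available; the helper and argument names are illustrative):

```python
# Illustrative sketch only: append a clarification session to validation.yaml.
from datetime import datetime, timezone

import yaml


def append_session(path, reviewers, questions, doc_updates):
    with open(path) as f:
        data = yaml.safe_load(f) or {}
    sessions = data.setdefault("clarification_sessions", [])
    sessions.append({
        "id": f"S{len(sessions) + 1:03d}",
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "source": "spec-review",
        "reviewers": reviewers,
        "questions": questions,
        "doc_updates": doc_updates,
    })
    with open(path, "w") as f:
        yaml.safe_dump(data, f, sort_keys=False)
```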
Update markers section:
Markers addressed in the session are marked resolved (status: resolved); markers still awaiting answers stay open (status: open).

All gates pass:
Review complete. All gates passed.
Recommendation: Ready for /spec.promote or /implement
Issues found:
Review complete. 2 gates failed.
Recommendation:
1. Address critical/high issues
2. Re-run /spec.review
| Gate | What It Checks |
|---|---|
| Completeness | All requirements specified, no missing behaviors |
| Consistency | Documents align, no contradictions, terms used consistently |
| Feasibility | Tasks implementable, dependencies available, no blockers |
| Clarity | Unambiguous, fresh developer can understand scope |
OpenCode timeout (> 20 minutes):
One reviewer fails:
No reviewers selected:
Spec not found:
./specs/draft/ and ./specs/active/

OpenCode command syntax:
opencode run --model "openai/gpt-5.2" --variant {reasoning}-medium {query}
opencode run --model "google/gemini-3-pro-preview" --variant {reasoning}-medium {query}

Command: /spec.review [spec-name]
Related skills:
- spec-create - Creates specs to review
- clarify - Resolves markers found during review
- task-dispatch - Next step after review passes