From srnnkls-tropos
Create spec documents (spec.md, context.md, tasks.yaml, dependencies.yaml, validation.yaml). Receives validation data from spec-validate.
```sh
npx claudepluginhub joshuarweaver/cascade-code-general-misc-2 --plugin srnnkls-tropos
```

This skill uses the workspace's default tool permissions.
Creates structured tracking documents for complex development tasks.
DO use for:
DON'T use for:
If invoked after spec-validate:
Derive from: git branch name, ExitPlanMode plan name, or user-provided argument.
Format: kebab-case (e.g., add-temporal-joins). If unclear, ask user.
Read relevant files to understand: task goal, key files, central types, architectural decisions, current vs target state.
If input contains code blocks (python, rust, etc.):
Extract code blocks with their language tags
Create resources directory:
```sh
mkdir -p ./specs/draft/[task-name]/resources/
```
Stage extracted content (held in memory until user chooses):
Ask user which resources to create (use AskUserQuestion with multiSelect):
Header: Resources
Question: Which implementation artifacts should be preserved?
multiSelect: true
Options:
- implementation: Code sketches/examples - patterns to follow (loqui-validated)
- schemas: API contracts, data models, type definitions
- config: Configuration examples
- patterns: Integration and test patterns
- assets: Diagrams, screenshots, other media
- none: Skip resources, spec only
Create selected resources:
loqui_check section
Continue with spec generation using the extracted requirements (not code details).
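The code-block extraction step above can be sketched with a regex over the raw input (an illustrative sketch; a production implementation might use a real Markdown parser instead):

```python
import re

# Build the triple-backtick delimiter programmatically to avoid nesting fences here.
TICKS = "`" * 3
FENCE = re.compile(TICKS + r"(\w+)\n(.*?)" + TICKS, re.DOTALL)

def extract_code_blocks(text: str) -> list[tuple[str, str]]:
    """Return (language_tag, code) pairs for every tagged fenced block."""
    return [(lang, body.rstrip()) for lang, body in FENCE.findall(text)]
```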
Resources directory structure (all subdirectories optional):
```
specs/draft/{spec-name}/
├── spec.md
├── context.md
├── tasks.yaml
├── ...
└── resources/            # Only if user selected any
    ├── implementation.md # If "implementation" selected
    ├── schemas/          # If "schemas" selected
    ├── config/           # If "config" selected
    ├── patterns/         # If "patterns" selected
    └── assets/           # If "assets" selected
```
Only for Initiative/Feature (skip for Task):
Question 1: Select reviewers:
Header: Reviewers
Question: Which reviewers should analyze implementation batches?
multiSelect: true
Options:
- claude-opus: Claude Opus - native reviewer, comprehensive (Recommended)
- claude-sonnet: Claude Sonnet - faster native review
- openai-gpt5.2: OpenAI GPT-5.2 - base model
- openai-gpt5.2-codex: OpenAI GPT-5.2 Codex - code-specialized
- openai-gpt5.2-pro: OpenAI GPT-5.2 Pro - extended capabilities (Recommended)
- gemini-3-flash: Google Gemini 3 Flash - fast, efficient
- gemini-3-pro: Google Gemini 3 Pro - advanced reasoning (Recommended)
Default selection: claude-opus, openai-gpt5.2-pro, gemini-3-pro
Question 2: Select reasoning effort (if OpenCode reviewers selected):
Header: Reasoning
Question: What reasoning effort level for OpenCode reviewers?
multiSelect: false
Options:
- low: Quick responses, minimal deliberation
- medium: Balanced reasoning (Recommended)
- high: Deep analysis, thorough deliberation
- xhigh: Maximum reasoning (GPT-5.2 only)
Default: medium
Store in validation.yaml under review_config:
```yaml
review_config:
  reasoning_effort: medium  # low | medium | high | xhigh
  reviewers:
    - type: claude
      model: opus  # or sonnet
    - type: opencode
      model: openai/gpt-5.2
    - type: opencode
      model: google/gemini-3-pro-preview
```
Variant format: {reasoning_effort}-medium (verbosity fixed at medium)
Model mapping:
- claude-opus → {type: claude, model: opus}
- claude-sonnet → {type: claude, model: sonnet}
- openai-gpt5.2 → {type: opencode, model: openai/gpt-5.2}
- openai-gpt5.2-codex → {type: opencode, model: openai/gpt-5.2-codex}
- openai-gpt5.2-pro → {type: opencode, model: openai/gpt-5.2}
- gemini-3-flash → {type: opencode, model: google/gemini-3-flash-preview}
- gemini-3-pro → {type: opencode, model: google/gemini-3-pro-preview}

```sh
mkdir -p ./specs/draft/[task-name]/
```
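Since the mapping above is pure data, it can live in a lookup table that also assembles the review_config block (a sketch; the names mirror this document, not a published API):

```python
# Reviewer option -> reviewer entry, mirroring the mapping listed above.
MODEL_MAP = {
    "claude-opus":         {"type": "claude",   "model": "opus"},
    "claude-sonnet":       {"type": "claude",   "model": "sonnet"},
    "openai-gpt5.2":       {"type": "opencode", "model": "openai/gpt-5.2"},
    "openai-gpt5.2-codex": {"type": "opencode", "model": "openai/gpt-5.2-codex"},
    "openai-gpt5.2-pro":   {"type": "opencode", "model": "openai/gpt-5.2"},
    "gemini-3-flash":      {"type": "opencode", "model": "google/gemini-3-flash-preview"},
    "gemini-3-pro":        {"type": "opencode", "model": "google/gemini-3-pro-preview"},
}

def build_review_config(selected: list[str], effort: str = "medium") -> dict:
    """Assemble the review_config block stored in validation.yaml."""
    return {"reasoning_effort": effort,
            "reviewers": [MODEL_MAP[name] for name in selected]}
```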
Generate these files:
- spec.md - Strategic spec (review this)
- context.md - Implementation context (update as you work)
- tasks.yaml - Work checklist (dignity CLI, TodoWrite sync)
- dependencies.yaml - Task dependency graph (parallel dispatch)
- validation.yaml - Audit trail and gate checks

Document scaling by issue type:
| Document | Initiative | Feature | Task |
|---|---|---|---|
| spec.md | Full strategic | Standard | Lightweight |
| context.md | High-level | Standard | Skip |
| tasks.yaml | Feature breakdown + phases | Task breakdown | Single task |
| dependencies.yaml | Full DAG | Phase-based | Skip |
| validation.yaml | Full (7 areas + gates) | Full (7 areas) | Skip |
Task output = 2 files: spec.md (lightweight) + tasks.yaml
Frontmatter for spec.md:
```yaml
---
issue_type: [Initiative|Feature|Task]
created: [Date]
status: Draft
stage: draft
claude_plan: [path to native Claude plan file, if exists]
---
```
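Reading these fields back is a small parse (a sketch assuming the simple key: value frontmatter shown above; a real implementation would hand the block to a YAML parser):

```python
import re

def parse_frontmatter(text: str) -> dict:
    """Extract key: value pairs from a spec.md frontmatter block."""
    match = re.match(r"---\n(.*?)\n---", text, re.DOTALL)
    if not match:
        return {}
    fields = {}
    for line in match.group(1).splitlines():
        key, _, value = line.partition(":")
        fields[key.strip()] = value.strip()
    return fields
```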
If a native Claude plan exists (from EnterPlanMode/ExitPlanMode), include its path in the claude_plan field. This links the detailed spec to its originating design document.
Based on validation data and issue type, include optional sections:
spec.md:
| Section | Initiative | Feature | Task |
|---|---|---|---|
| User Stories (P1/P2/P3) | Include | Skip | Skip |
| Given/When/Then Acceptance | Full | Standard | Simple |
| API Contract | Include if API work | Opt-in | Skip |
| Implementation Strategy | Include | Skip | Skip |
context.md:
| Section | Initiative | Feature | Task |
|---|---|---|---|
| Tech Decisions | Include | Opt-in | (file skipped) |
| Data Model | Include | Opt-in | (file skipped) |
validation.yaml:
| Section | Initiative | Feature | Task |
|---|---|---|---|
| Gates | Evaluate all | n/a | (file skipped) |
| Complexity Tracking | If violations | If violations | (file skipped) |
| Markers | Full | Full | (file skipped) |
| Review Config | Full | Full | (file skipped) |
tasks.yaml:
| Element | Initiative | Feature | Task |
|---|---|---|---|
| Phases with checkpoints | Full (gates between phases) | Simple (optional checkpoints) | Skip |
| Evidence tracking | Full | Opt-in | Skip |
| Dependencies | Full | Opt-in | Skip |
tasks.yaml structure:
```yaml
spec: ${SPEC_NAME}
code: ${CODE}  # Prefix for task IDs (e.g., FEAT → FEAT-001)
next_id: 1
tasks:
  - id: ${CODE}-001
    content: Task description
    status: pending  # pending | in_progress | completed
    active_form: Doing task description
meta:
  created: ${DATE}
  last_updated: ${DATE}
  progress: 0/N
phases:  # Optional: organize tasks with checkpoints
  - name: "Phase 1: Setup"
    task_ids: [${CODE}-001, ${CODE}-002]
    checkpoint:
      description: Setup complete
      criteria: [...]
      verified: false
```
Used by dignity for task management operations. The code field generates sequential IDs (e.g., AUT-001, AUT-002).
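The ID scheme can be sketched as a counter over the code prefix (illustrative only, not dignity's actual implementation):

```python
def next_task_id(doc: dict) -> str:
    """Return the next sequential task ID and advance the next_id counter."""
    task_id = f"{doc['code']}-{doc['next_id']:03d}"
    doc["next_id"] += 1
    return task_id
```

With `{"code": "AUT", "next_id": 1}`, successive calls yield AUT-001, AUT-002, and so on, matching the example above.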
Parse the just-created tasks.yaml and populate TodoWrite:
Example:

```yaml
# tasks.yaml:
tasks:
  - id: IMPL-001
    content: Create implement.md command
    status: pending
    active_form: Creating implement.md command
  - id: IMPL-002
    content: Create docs-implement SKILL.md
    status: pending
    active_form: Creating docs-implement SKILL.md
```

```js
// TodoWrite:
[
  {"content": "Create implement.md command", "status": "pending", "activeForm": "Creating implement.md command"},
  {"content": "Create docs-implement SKILL.md", "status": "pending", "activeForm": "Creating docs-implement SKILL.md"}
]
```
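The translation above is a field rename over already-parsed YAML (a sketch assuming tasks.yaml has been loaded, e.g. with PyYAML; only the field names shown in the example are used):

```python
def to_todowrite(tasks: list[dict]) -> list[dict]:
    """Map parsed tasks.yaml entries to TodoWrite items (id and meta are dropped)."""
    return [{"content": t["content"],
             "status": t["status"],
             "activeForm": t["active_form"]}  # snake_case -> camelCase
            for t in tasks]
```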
Show user:
- ./specs/draft/[task-name]/
- "/spec.review or /spec.promote when ready to activate"

After presenting the summary, offer a spec review:
Use AskUserQuestion:
Header: Review
Question: Would you like a comprehensive spec review before implementation?
multiSelect: false
Options:
- Yes: Run multi-agent review (Claude + OpenCode)
- Later: Skip for now, use /spec.review when ready
- Skip: Proceed without review
If "Yes": Invoke spec-review skill with the just-created spec name.
For humans (review these):
```
├── spec.md     # WHY & WHAT - Strategic requirements, acceptance criteria
├── context.md  # WHAT WE LEARNED - Key files, decisions, gotchas
└── resources/  # HOW TO BUILD - Implementation details (when provided)
```
For tooling (infrastructure):
```
├── tasks.yaml        # Progress tracking, TodoWrite sync
├── dependencies.yaml # Parallel dispatch DAG
└── validation.yaml   # Audit trail, gate checks, reviewer config, loqui validation
```
The YAML files are infrastructure - humans can read them for debugging or auditing, but the primary consumers are tooling and automation.
validation.yaml usage:
- review_config.reviewers - Used by task-dispatch for batch reviews
- gates - Pre-implementation gate checks for Initiatives
- markers - Unresolved items requiring clarification

resources/ usage:
Only created when implementation details are provided in the input. Contains structured artifacts:
| Subdirectory | Content |
|---|---|
| implementation.md | Code sketches and examples - patterns to follow, not tested/final (loqui-validated) |
| schemas/ | API contracts, data models, type definitions |
| config/ | Configuration examples |
| patterns/ | Integration and test patterns |
| assets/ | Diagrams, screenshots, other media |
Review burden: 2-3 documents (spec.md, context.md, resources/ when present), not 8+.
See guidelines.md for detailed breakdown.
If a native Claude plan exists (from EnterPlanMode), add a Native Plan section to context.md:
## Native Plan
**Source:** `/path/to/.claude/specs/spec-name.md`
Summary of the original design:
- Goal: [brief goal from native plan]
- Approach: [key approach decisions]
- Open questions resolved: [any clarifications made]
This preserves the connection to the original design discussion and rationale.
Located in templates/ directory:
Invoked by: /spec.create command
spec-validate first, then this skill.

Related commands:
- /spec.review - Review spec with multiple AI reviewers
- /spec.update - Sync spec with git history
- /spec.archive - Archive completed spec
- /spec.issues - Generate GitHub issues from spec