Extracts requirements from human spec collateral using chunking and parallel subagents, producing per-epic files with proof obligations and stable-ID behavior scenarios for iterative development.
```
npx claudepluginhub prime-radiant-inc/prime-radiant-marketplace --plugin iterative-development
```

This skill uses the workspace's default tool permissions.
Reads arbitrary human spec collateral and produces two artifact sets:
- docs/superpowers/iterations/requirements/ — story cards with proof obligations per AC
- docs/superpowers/iterations/behavior-scenarios.md — reusable observable-behavior contracts with stable IDs

Uses a chunking + parallel-dispatch + aggregation pipeline so that no single agent holds the entire spec in context. Handles specs from a single page up to ~100K tokens across dozens of files.
Invoked by iterative-development during bootstrap, or standalone when you need to regenerate requirements from human spec collateral.
All scripts referenced below live in this skill's scripts/ directory, next to this SKILL.md file.
The spec directory structure drives proof seam classification. See skills/shared/behavior-evidence-formats.md for the full taxonomy. Summary:
| Spec directory | Default proof seam |
|---|---|
| test-vectors/ | unit |
| contracts/ | integration |
| domains/ | integration or app-level |
| journeys/ | e2e |
Extraction subagents use the appropriate prompt variant based on source file location.
Enumerate the spec files without reading full contents:
```
python3 "scripts/chunk_spec.py" <spec-path>
```
This produces a JSON array of chunks. Each chunk has source_file, heading, start_line, end_line, content, and estimated_tokens. Small files (< 4K tokens) are kept whole. Larger files are split by ## headings, or ### if sections are still too large.
Classify each chunk by spec taxonomy: note whether the source file is under journeys/, contracts/, domains/, or test-vectors/. This determines which extraction prompt variant to use.
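For illustration, here is a minimal sketch of consuming the chunk inventory and picking a prompt variant by source directory. The variant names other than Journey Extraction, the fallback choice, and the "specs/" path are assumptions, not the scripts' actual API:

```python
import json
import subprocess
from pathlib import PurePosixPath

# Run the chunker and parse its JSON output; chunk fields are as described
# above (source_file, heading, start_line, end_line, content, estimated_tokens).
raw = subprocess.run(
    ["python3", "scripts/chunk_spec.py", "specs/"],  # "specs/" is a placeholder path
    capture_output=True, text=True, check=True,
).stdout
chunks = json.loads(raw)

# Hypothetical mapping from spec directory to extraction prompt variant,
# mirroring the proof-seam table above; only the Journey variant name is
# confirmed by this document.
VARIANT_BY_DIR = {
    "journeys": "Journey Extraction",
    "contracts": "Contract Extraction",
    "domains": "Domain Extraction",
    "test-vectors": "Test-Vector Extraction",
}

def classify_chunk(chunk: dict) -> str:
    """Pick the extraction prompt variant from the chunk's source directory."""
    for part in PurePosixPath(chunk["source_file"]).parts:
        if part in VARIANT_BY_DIR:
            return VARIANT_BY_DIR[part]
    return "Domain Extraction"  # assumed fallback for files outside the taxonomy

for chunk in chunks:
    print(chunk["source_file"], chunk["estimated_tokens"], classify_chunk(chunk))
```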
For each chunk (or batch of small chunks), dispatch an extraction subagent using the appropriate template from extraction-subagent-prompt.md:
- journeys/ → use the Journey Extraction prompt variant

Pass the chunk content inline — do NOT make the subagent read the file.
Payload integrity: If your platform has output token limits that could truncate the chunk before it reaches the subagent prompt, stage each chunk individually and verify the subagent received the complete content (e.g., by checking that the extracted stories reference lines from the full range of the chunk). Partial payloads are easy to miss and cause silent under-extraction.
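One way to run that spot check, sketched under the assumption that each extracted story cites (file, start_line, end_line) sources; the story shape is an assumption:

```python
def payload_looks_complete(chunk: dict, stories: list[dict]) -> bool:
    """Heuristic: extracted stories should cite lines reaching into the tail
    of the chunk. If nothing references the last quarter, suspect truncation.
    Assumes stories carry sources: [{"file", "start_line", "end_line"}, ...].
    """
    cited = [
        (src["start_line"], src["end_line"])
        for story in stories
        for src in story.get("sources", [])
        if src["file"] == chunk["source_file"]
    ]
    if not cited:
        return False
    span = chunk["end_line"] - chunk["start_line"]
    tail_start = chunk["end_line"] - span // 4  # last quarter of the chunk
    return any(end >= tail_start for _, end in cited)
```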
Dispatch strategy: run extraction subagents in parallel, batching small chunks together so each subagent receives a reasonably sized payload (see the sketch below).
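One plausible batching approach, assuming a ~4K-token batch budget consistent with the small-file threshold in the chunking step; the budget and the greedy packing are illustrative choices, not something the scripts prescribe:

```python
def batch_chunks(chunks: list[dict], budget: int = 4_000) -> list[list[dict]]:
    """Greedily pack small chunks into batches so each subagent gets a
    reasonably sized payload; oversized chunks travel alone."""
    batches: list[list[dict]] = []
    current: list[dict] = []
    used = 0
    for chunk in chunks:
        tokens = chunk["estimated_tokens"]
        if tokens >= budget:
            batches.append([chunk])  # large chunk: its own subagent
            continue
        if used + tokens > budget and current:
            batches.append(current)
            current, used = [], 0
        current.append(chunk)
        used += tokens
    if current:
        batches.append(current)
    return batches
```

Each batch then becomes one subagent dispatch, with the chunk content passed inline as step 3 requires.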
Before aggregation, run a PAR omission review. The sole job of this review is to find requirements AND scenarios that the extraction subagents dropped.
For each chunk (or batch of chunks), dispatch two reviewers in parallel following skills/shared/parallel-adversarial-review.md.
This pass is required, not optional. Extraction subagents optimize for what they notice; omission reviewers optimize for what's missing.
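To make the hand-off into aggregation concrete, here is a minimal sketch of staging the subagent outputs together with the reviewers' recovered stories; the directory layout and file naming are assumptions:

```python
from pathlib import Path

def collect_story_jsons(extracted_dir: str, review_dir: str, staging_dir: str) -> list[str]:
    """Stage subagent story JSONs alongside any stories the omission
    reviewers recovered, so aggregation sees both (deduplication happens
    later, in aggregate_stories.py)."""
    staging = Path(staging_dir)
    staging.mkdir(parents=True, exist_ok=True)
    staged = []
    sources = list(Path(extracted_dir).glob("*.json")) + list(Path(review_dir).glob("*.json"))
    for i, src in enumerate(sources):
        dest = staging / f"{i:03d}_{src.name}"  # numeric prefix avoids name collisions
        dest.write_text(src.read_text())
        staged.append(str(dest))
    return staged
```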
Run the story aggregation script on all extracted story JSONs (including any added by the omission review):
```
python3 "scripts/aggregate_stories.py" -o docs/superpowers/iterations/requirements/ <json-file-1> <json-file-2> ...
```
The script combines, deduplicates by title, groups into epics, assigns stable STORY/EPIC IDs, and outputs per-epic files with proof obligations preserved.
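The stable-ID property is worth making concrete: a story should keep its ID across re-runs. A minimal sketch of one way that could work, assuming a persisted title→ID map and a STORY-NNNN format; aggregate_stories.py may implement this differently:

```python
import json
from pathlib import Path

def assign_stable_ids(titles: list[str], id_map_path: str = "story-ids.json") -> dict[str, str]:
    """Assign STORY-NNNN IDs, reusing any ID a (normalized) title already
    holds from a previous run so re-aggregation never renumbers stories.
    The id_map_path side file is an assumption of this sketch."""
    path = Path(id_map_path)
    id_map: dict[str, str] = json.loads(path.read_text()) if path.exists() else {}
    next_n = 1 + max((int(i.split("-")[1]) for i in id_map.values()), default=0)
    for title in titles:
        key = " ".join(title.lower().split())  # normalize for dedup by title
        if key not in id_map:
            id_map[key] = f"STORY-{next_n:04d}"
            next_n += 1
    path.write_text(json.dumps(id_map, indent=2))
    return id_map
```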
Run the scenario aggregation script:
```
python3 "scripts/aggregate_scenarios.py" \
  -o docs/superpowers/iterations/behavior-scenarios.md \
  --stories-dir docs/superpowers/iterations/requirements/ \
  <json-file-1> <json-file-2> ...
```
The script combines, deduplicates by title, assigns stable SCENARIO/JOURNEY IDs, resolves story title references to STORY-IDs, and outputs behavior-scenarios.md.
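For illustration, a sketch of the title-to-ID resolution, assuming scenarios carry the owning_story_titles field named in the consolidation check below and stories expose title and id; this is not the script's actual code:

```python
def resolve_story_refs(scenarios: list[dict], stories: list[dict]) -> None:
    """Replace each scenario's owning_story_titles with resolved STORY-IDs,
    matching on normalized titles; unresolved titles are kept verbatim so
    the validation step can flag them."""
    by_title = {" ".join(s["title"].lower().split()): s["id"] for s in stories}
    for scenario in scenarios:
        scenario["owning_stories"] = [
            by_title.get(" ".join(t.lower().split()), t)
            for t in scenario.get("owning_story_titles", [])
        ]
```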
Same as before: review the epic list, merge near-duplicates, re-run aggregation. See the consolidation rules in the original extraction skill documentation.
Additional consolidation check: after merging, verify that scenario owning_story_titles still resolve correctly. If stories were deduplicated during re-aggregation, re-run scenario aggregation to update resolved refs.
After both aggregations complete, run the back-linking script to update per-epic story files with scenario references:
```
python3 "scripts/backlink_scenarios.py" \
  docs/superpowers/iterations/behavior-scenarios.md \
  docs/superpowers/iterations/requirements/
```
The script reads scenario → owning-story mappings from behavior-scenarios.md and appends `scenario:SCENARIO-NNNN` or `scenario:JOURNEY-NNNN` to those AC lines in the epic files that have observable behavioral impact. AC lines that already carry a scenario ref are skipped.
This creates the bidirectional link: stories → scenarios (via AC lines) and scenarios → stories (via owning_stories field).
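As a sketch of the append-if-absent rule, assuming AC lines are plain text lines and refs use the scenario:ID form above; the real script's parsing is presumably stricter:

```python
import re

def backlink_ac_line(line: str, scenario_id: str) -> str:
    """Append a scenario:ID ref to an AC line unless one is already present
    (AC lines that already carry a ref are skipped, per the rule above)."""
    if re.search(r"scenario:(SCENARIO|JOURNEY)-\d+", line):
        return line
    return f"{line.rstrip()} scenario:{scenario_id}\n"

# Hypothetical AC line format, for illustration only:
# backlink_ac_line("- AC: checkout emits a receipt event\n", "SCENARIO-0012")
# -> "- AC: checkout emits a receipt event scenario:SCENARIO-0012\n"
```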
Build a coverage ledger that maps every spec chunk to its extracted stories AND scenarios. This is the traceable proof that extraction is complete.
For each chunk from the inventory (step 1), record:
- its source_file, heading, and start_line–end_line range
- every story whose **Sources:** field cites overlapping lines in that file
- every scenario whose **Sources:** field cites overlapping lines in that file

Classify each chunk as covered (story and scenario coverage), story-only, or gap.

Hard gates:
- Journey coverage check: every journey spec file MUST produce at least one JOURNEY-NNNN scenario that preserves the complete step sequence. If a journey file only produced stories (no journey scenario), that is a gap.
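To make the ledger mechanics concrete, a minimal sketch of the per-chunk classification, assuming stories and scenarios both carry a sources list of (file, start_line, end_line) citations matching their **Sources:** fields; the real bookkeeping may differ:

```python
def ledger_status(chunk: dict, stories: list[dict], scenarios: list[dict]) -> str:
    """Classify a chunk as covered / story-only / gap depending on whether
    any story or scenario cites overlapping lines in the chunk's file."""
    def overlaps(item: dict) -> bool:
        return any(
            src["file"] == chunk["source_file"]
            and src["start_line"] <= chunk["end_line"]
            and src["end_line"] >= chunk["start_line"]
            for src in item.get("sources", [])
        )
    has_story = any(overlaps(s) for s in stories)
    has_scenario = any(overlaps(s) for s in scenarios)
    if has_story and has_scenario:
        return "covered"
    if has_story:
        return "story-only"
    return "gap"
```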
Create the initial docs/superpowers/iterations/behavior-corpus.md from the scenario list:
```markdown
# Behavior Corpus

| Scenario ID | Title | Proof seam | Run cadence | Command | Owning stories |
|---|---|---|---|---|---|
```
Populate with all scenarios. Set run cadence:
- sentinel (they run every iteration)
- iteration (default, refined during scoping)

Set command to TBD — the implementing iterations will fill these in.
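A minimal sketch of emitting the table body from the scenario list; the id, title, proof_seam, and owning_stories fields and the sentinel flag are assumptions of this sketch:

```python
def corpus_rows(scenarios: list[dict]) -> str:
    """Emit the corpus table body: sentinel scenarios run every iteration,
    everything else defaults to the iteration cadence; commands start as TBD."""
    lines = []
    for s in scenarios:
        cadence = "sentinel" if s.get("sentinel") else "iteration"
        owners = ", ".join(s.get("owning_stories", []))
        lines.append(
            f"| {s['id']} | {s['title']} | {s['proof_seam']} | {cadence} | TBD | {owners} |"
        )
    return "\n".join(lines)
```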
Run both validators:

```
python3 "scripts/validate_requirements_index.py" docs/superpowers/iterations/requirements/
python3 "scripts/validate_scenarios.py" docs/superpowers/iterations/behavior-scenarios.md docs/superpowers/iterations/requirements/
```
If validation fails, inspect the output, fix formatting issues, and re-validate.
Then commit the artifacts:

```
git add docs/superpowers/iterations/requirements/
git add docs/superpowers/iterations/behavior-scenarios.md
git add docs/superpowers/iterations/behavior-corpus.md
git commit -m "docs: add requirements with proof obligations, behavior scenarios, and corpus index"
```
| Step | Tool | Input | Output |
|---|---|---|---|
| Chunk | scripts/chunk_spec.py | spec path | JSON chunks (stdout) |
| Extract | Subagent + extraction-subagent-prompt.md | chunk content | JSON stories + scenarios (per subagent) |
| Omission review | PAR (source text vs. stories + scenarios) | chunks + stories + scenarios | Missing requirements and scenarios |
| Aggregate stories | scripts/aggregate_stories.py -o <dir> | JSON files | Per-epic .md files with proof obligations |
| Aggregate scenarios | scripts/aggregate_scenarios.py -o <file> | JSON files + stories dir | behavior-scenarios.md |
| Back-link | scripts/backlink_scenarios.py | scenarios + stories | Updated AC lines with scenario refs |
| Coverage ledger | Map chunks → story IDs + scenario IDs | chunk list, stories, scenarios | Gap/covered/story-only per chunk |
| Init corpus | Write corpus index | scenario list | behavior-corpus.md |
| Validate | scripts/validate_requirements_index.py + scripts/validate_scenarios.py | .md files | OK or errors |
Not yet handled: hierarchical reduce for specs over 1M tokens, huge-spec decomposition, and incremental re-extraction.