Orchestrates iterative development for large or ambiguous project specs: extracts requirements with proof obligations and behavior scenarios, scopes a walking skeleton, then loops through audited sprints that build behavior evidence until every requirement passes.
```
npx claudepluginhub prime-radiant-inc/prime-radiant-marketplace --plugin iterative-development
```

This skill uses the workspace's default tool permissions.
Orchestrator for the iterative-development plugin. Drives the full autonomous lifecycle: extract requirements with proof obligations and behavior scenarios from human spec collateral, define a walking skeleton that passes its first journey scenario, then loop through audited sprints that continuously build a reusable behavior evidence corpus. Completion means the product has passing behavior evidence at the correct seam for every externally observable requirement — not just that stories are marked done. Every evaluative gate uses parallel adversarial review (PAR).
This is an alternative to superpowers:writing-plans → superpowers:subagent-driven-development for projects where the upfront-planning approach would lose the plot.
Do NOT use for small, bounded projects — superpowers:writing-plans → superpowers:subagent-driven-development is simpler and more appropriate.
1. Check docs/superpowers/iterations/ for existing state. If found, skip to Resume below.
2. Run extracting-requirements on the human-provided spec path. Outputs: docs/superpowers/iterations/requirements/, docs/superpowers/iterations/behavior-scenarios.md, and docs/superpowers/iterations/behavior-corpus.md.
3. Run scoping-the-simplest-core on the resulting backlog. Output: docs/superpowers/iterations/roadmap.md.
4. Enter the main loop:

```
while True:
    check_for_human_interrupt()
    if not roadmap has pending iterations:
        if last audit was clean:
            run final behavior-evidence audit (see below)
            if behavior audit clean:
                break  # done
            # else: audit found uncovered surfaces or weak evidence, new iterations added
        # else: audit found gaps, new iterations were added, continue
    run next iteration:
        - running-an-iteration (sentinel baseline → scope review → decompose code + evidence tasks → implementing-tasks → impacted + sentinel scenario runs → wrap up)
    audit:
        - auditing-progress (PAR paired auditors, three-tier: deep evidence + impacted behavior + sentinel corpus)
        - if gaps: append to backlog, revise roadmap, continue
        - if clean: mark last_audit_clean, continue
```
Before declaring the project complete, verify that the product has adequate behavior evidence — not just that all stories are marked done:
The final question is: "Can the system point to passing behavior evidence for every externally observable requirement the spec describes?" Not: "Are the stories done?"
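That completion question can be sketched as a coverage pass over the backlog and the behavior corpus. The field names below (`externally_observable`, `scenario_ids`, and a scenario-id-to-status mapping) are assumed shapes for illustration, not the plugin's real artifact schema.

```python
def final_audit_gaps(requirements: list[dict], corpus: dict[str, str]) -> list[str]:
    """Return IDs of externally observable requirements lacking passing evidence.

    Illustrative only: the card fields and the corpus's
    scenario-id -> status mapping are assumptions.
    """
    gaps = []
    for req in requirements:
        if not req.get("externally_observable"):
            continue  # only externally observable requirements need evidence
        scenario_ids = req.get("scenario_ids", [])
        if not any(corpus.get(sid) == "passing" for sid in scenario_ids):
            gaps.append(req["id"])
    return gaps  # empty list means the behavior audit is clean
```

An empty result answers "yes" to the completion question; a non-empty result becomes new corrective iterations appended to the backlog.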
All process state lives in artifact files:
- docs/superpowers/iterations/requirements/ (backlog with story status and proof obligations)
- docs/superpowers/iterations/behavior-scenarios.md (scenario cards with stable IDs)
- docs/superpowers/iterations/behavior-corpus.md (execution index)
- docs/superpowers/iterations/roadmap.md (iteration plan with status)
- docs/superpowers/iterations/iteration-log.md (completed iteration history)

On re-invocation: read roadmap.md, find the next pending iteration, and continue from there. There is no ephemeral in-memory state to recover. The command "continue iterative development with the existing plan" always works.
If the orchestrator crashed mid-iteration, the partially-completed iteration's git commits are preserved. On resume, the next un-started iteration picks up. If the in-progress iteration left the code in a broken state, treat it as a gap — the audit will catch it and add corrective work.
The loop runs without human intervention. The only way the human injects new information mid-run is by interrupting between iterations.
How it works:
Run extracting-requirements in incremental mode on the changed spec files, merge new/revised story cards into the backlog, revise the roadmap if changes invalidate downstream iterations, then resume.

Guarantees: existing work is deferred, not deleted.

What does NOT trigger interrupt processing:
The autonomous loop may run for hours. Two progress mechanisms ensure visibility without requiring interruption:
1. Progress file: Write docs/superpowers/iterations/progress.md at each phase transition:

```
# Progress
**Phase:** implementing ITER-0003
**Task:** 4/7 (CleanupPipeline integration)
**Iterations:** 3/18 done, 15 pending
**Sentinel corpus:** 10/10 passing
**Last event:** 2026-04-11T14:23:00Z — Task 3 committed
```
Update this file at: iteration start, each task completion, iteration wrap-up, audit start/end. Overwrite (not append) — it's a snapshot of current state, not a log.
2. Git log: Every task produces a commit. The commit history is a detailed progress trail. A human can check `git log --oneline` for fine-grained status without interrupting the loop.
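The progress-file mechanism can be sketched as a single overwrite at each transition. The field layout mirrors the example above; the function signature itself is an assumption, not part of the plugin.

```python
from datetime import datetime, timezone
from pathlib import Path

def write_progress(path: str, phase: str, task: str,
                   iterations: str, sentinel: str, event: str) -> None:
    """Overwrite the snapshot file — a full rewrite each time, never an append.

    Field layout mirrors the documented example; the signature is illustrative.
    """
    stamp = datetime.now(timezone.utc).isoformat(timespec="seconds")
    Path(path).write_text(
        "# Progress\n"
        f"**Phase:** {phase}\n"
        f"**Task:** {task}\n"
        f"**Iterations:** {iterations}\n"
        f"**Sentinel corpus:** {sentinel}\n"
        f"**Last event:** {stamp} — {event}\n"
    )
```

A full rewrite keeps the file a constant-size snapshot of current state, so a human can glance at it at any point without scrolling a growing log.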
When running autonomously, this orchestrator takes precedence over interactive-gate skills (e.g., brainstorming which requires design approval before implementation). The iterative-development process has its own design gates (scope review, PAR) that replace interactive approval. Do not block on skills that assume a human is present to approve each step.
Catastrophe-only. The loop is autonomous. Human escalation is reserved for total failure — the plugin cannot make any forward progress at all.
These do NOT trigger escalation:
The orchestrator does NOT prompt "should I continue?" between iterations.
| Phase | Skill | What it does |
|---|---|---|
| Extract | extracting-requirements | Chunk → parallel extract → aggregate → requirements/ |
| Scope | scoping-the-simplest-core | Walking skeleton + iterations → roadmap.md (with PAR scope review) |
| Implement | running-an-iteration | Scope review → decompose → implementing-tasks → wrap up |
| Task execution | implementing-tasks | Per-task: implementer → PAR spec review → PAR quality review |
| Audit | auditing-progress | PAR paired auditors, two-tier (deep + sweep) |
All plugin artifacts live in docs/superpowers/iterations/. Never modify the human's spec collateral.
| File | Purpose |
|---|---|
| requirements/ | Backlog: story cards + epics with stable IDs and proof obligations |
| behavior-scenarios.md | Behavior contracts: reusable scenario cards with stable IDs |
| behavior-corpus.md | Execution index: scenario → seam → cadence → command |
| roadmap.md | Sprint plan: ordered iterations with impacted scenarios |
| iteration-log.md | Sprint history: what each iteration delivered + scenarios added |
| progress.md | Live snapshot: current phase, task, iteration counts, sentinel status |
Every evaluative gate uses parallel adversarial review (PAR):
See skills/shared/parallel-adversarial-review.md for PAR methodology.