Executes next pending iteration from iterative-development roadmap: picks iteration, decomposes code/evidence tasks, runs sentinel baseline, dispatches implementing tasks, tests impacted/sentinel scenarios, updates artifacts.
`npx claudepluginhub prime-radiant-inc/prime-radiant-marketplace --plugin iterative-development`

This skill uses the workspace's default tool permissions.
Drives one iteration: picks the next pending, runs sentinel corpus baseline, runs pre-iteration scope review via PAR, decomposes into code and evidence tasks, dispatches `implementing-tasks`, runs impacted + sentinel scenarios at wrap-up, and updates the roadmap and iteration log.
Invoked by iterative-development inside the main loop. Each invocation runs exactly one iteration. After return, the orchestrator invokes auditing-progress.
All scripts referenced below live in this skill's scripts/ directory, next to this SKILL.md file.
Read docs/superpowers/iterations/roadmap.md, find the first iteration with status pending.
Read the per-epic files in docs/superpowers/iterations/requirements/ to load the full story cards for each committed story ID. Only read the epic files that contain stories for this iteration — not all of them. Also read:
- docs/superpowers/iterations/behavior-scenarios.md to identify impacted scenarios
- docs/superpowers/iterations/behavior-corpus.md to identify sentinel scenarios

Before any code changes, run every scenario in the behavior corpus with run cadence sentinel:
This establishes whether regressions exist before the current iteration starts.
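A minimal sketch of that baseline pass, assuming a hypothetical run_scenario.sh wrapper and a hypothetical SCN-NNNN scenario ID format; substitute whatever harness and naming the behavior corpus actually defines:

```bash
# Hypothetical sketch: the runner script and scenario ID format are placeholders.
# Record which corpus scenarios already fail before any code changes are made.
: > sentinel-baseline-failures.txt
grep -oE 'SCN-[0-9]+' docs/superpowers/iterations/behavior-corpus.md | sort -u |
while read -r scenario; do
  ./run_scenario.sh "$scenario" || echo "$scenario" >> sentinel-baseline-failures.txt
done
echo "Pre-existing failures: $(wc -l < sentinel-baseline-failures.txt)"
```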
Before planning any work, verify that artifact state is consistent:
- Run `python3 "scripts/check_citations.py" docs/superpowers/iterations/roadmap.md docs/superpowers/iterations/requirements/` — if citations fail, stop and fix the roadmap.
- No stories are prematurely marked done:ITER-XXXX in the requirements index (unless code/tests actually exist for them).
- Stories marked done in the requirements index actually have corresponding code and tests.
- The roadmap and iteration log agree about done stories.

If any inconsistencies are found, reconcile before proceeding. Do not trust any single artifact blindly — cross-check.
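The mechanical check can be run exactly as written above; the grep that follows is only an illustrative way to surface done:ITER markers for manual comparison (the marker format is taken from this document, and paths may need adjusting):

```bash
# Citation check, as specified by this skill:
python3 "scripts/check_citations.py" \
  docs/superpowers/iterations/roadmap.md \
  docs/superpowers/iterations/requirements/

# Illustrative cross-check only: list done:ITER markers in the requirements index
# so they can be compared against roadmap status and the actual code and tests.
grep -rn 'done:ITER-' docs/superpowers/iterations/requirements/
```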
Following skills/shared/parallel-adversarial-review.md:
- scope-reviewer-prompt.md
- skills/shared/par-reviewer-wrapper.md

Break the iteration scope into TDD-sized tasks. Each task = failing test → implementation → passing test → commit.
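A hedged illustration of that rhythm, assuming a pytest-style harness and hypothetical test, story, and iteration names (substitute whatever test runner the repository actually uses):

```bash
# Hypothetical example of one TDD-sized task; every name here is a placeholder.
pytest tests/test_new_behavior.py -x   # 1. run the new test, expect failure
# ...implement the smallest change that makes it pass...
pytest tests/test_new_behavior.py -x   # 2. run it again, expect success
git add -A
git commit -m "ITER-0042: make test_new_behavior pass"   # 3. commit the slice
```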
Evidence tasks: In addition to code tasks, identify:
Evidence tasks are first-class — they produce scenario updates, test harness extensions, and corpus index entries. They are NOT afterthoughts. Interleave evidence tasks with code tasks: after implementing a feature, the next task should be extending or adding the scenario that proves it.
Cross-iteration dependencies: Some stories reference subsystems that don't exist yet. For these, implement the thinnest abstraction boundary that satisfies the story's ACs without coupling to the future implementation. Prefer a single clean interface over a decomposed hierarchy — the real implementation will define its own internal structure when it arrives. Document the dependency with a TODO comment citing the future iteration. Do NOT defer the story silently or force premature integration.
Pass the task list (code + evidence tasks) and iteration context to implementing-tasks. Wait for completion.
After all tasks complete, run:
- the impacted scenarios identified from behavior-scenarios.md
- every scenario with run cadence sentinel

If any impacted or sentinel scenario fails that passed at baseline (step 3), this iteration introduced a regression. Create a fix task and re-dispatch to implementing-tasks.
Grep the codebase for TODO(ITER-<current>) markers — these are interface stubs that earlier iterations created expecting THIS iteration to provide the real implementation.
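A concrete form of that search, with ITER-0042 standing in as a placeholder for the current iteration's ID:

```bash
# ITER-0042 is a placeholder; use the ID of the iteration being executed.
grep -rn 'TODO(ITER-0042)' --exclude-dir=.git --exclude-dir=node_modules .
```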
For each marker found:
This step is a hard gate. An iteration that leaves its own TODO markers in the code is not done.
Confirm that no TODO(ITER-<current>) markers remain in the codebase (step 9), then update the artifacts:

- Mark the committed stories done:ITER-NNNN in the relevant epic files under requirements/.
- Update behavior-scenarios.md and behavior-corpus.md to reflect this iteration's scenarios.
- Set the iteration's status in roadmap.md to done.
- Append an entry to docs/superpowers/iterations/iteration-log.md — include:
Run `python3 "scripts/validate_iteration_log.py" docs/superpowers/iterations/iteration-log.md` (do not invoke auditing-progress — that's the orchestrator's job).

| Step | Tool/Skill | Purpose |
|---|---|---|
| Sentinel baseline | Run sentinel scenarios | Establish pre-iteration regression state |
| Citation check | scripts/check_citations.py | Mechanical: cited stories exist |
| Scope review | PAR + scope-reviewer-prompt.md | Semantic: scope, scenarios, splitting, boxing-in |
| Task execution | implementing-tasks | TDD code + evidence implementation |
| Post-iteration runs | Run impacted + sentinel scenarios | Catch regressions |
| TODO resolution | grep -rn 'TODO(ITER-<current>)' | Cross-iteration stubs resolved |
| Wrap up | scripts/validate_iteration_log.py | Artifact validation |
- skills/shared/parallel-adversarial-review.md — PAR methodology
- skills/shared/behavior-evidence-formats.md — scenario and proof obligation formats
- scope-reviewer-prompt.md — scope reviewer prompt template
- scripts/check_citations.py — mechanical citation check