Audits post-iteration behavior evidence quality in three tiers (deep evidence for stories, impacted scenarios, and sentinel corpus regressions) using parallel adversarial review.
```shell
npx claudepluginhub prime-radiant-inc/prime-radiant-marketplace --plugin iterative-development
```

This skill uses the workspace's default tool permissions.
Runs after every iteration as part of the planning cycle. Verifies behavior evidence quality in three tiers using **parallel adversarial review (PAR)** — two paired auditor subagents evaluate the same work in parallel with competitive framing.
The audit answers: "Does durable, reusable evidence exist at the correct seam for every externally observable behavior this iteration touched?"
Invoked by iterative-development after every running-an-iteration call, before picking the next iteration.
1. Read the per-epic requirement files in `docs/superpowers/iterations/requirements/`, along with `docs/superpowers/iterations/behavior-scenarios.md` and `docs/superpowers/iterations/behavior-corpus.md`. Identify:
   - Stories marked `done:ITER-<current>` and scenarios added or updated in this iteration. Audit every AC and its proof obligation thoroughly.
   - Scenarios tagged `sentinel` in the behavior corpus. Compare against the pre-iteration baseline from running-an-iteration step 3.
2. Following the PAR methodology in `skills/shared/parallel-adversarial-review.md`, dispatch two auditor subagents in parallel using `auditor-subagent-prompt.md`. Include ALL THREE tiers (deep evidence, impacted behavior, sentinel corpus), wrapping each prompt with `skills/shared/par-reviewer-wrapper.md`.
3. Following PAR aggregation rules, merge the two auditors' findings. If gaps are found:
   - Write new stories to `requirements/` (status pending) or flip existing stories back from done to pending.
   - Update `roadmap.md` to add a follow-up iteration for the gaps.
4. Return the audit result (clean or gaps) to the orchestrator. The orchestrator decides whether to loop or terminate.
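The dispatch-and-aggregate shape of the workflow above can be sketched as follows. This is a minimal illustration, not the skill's implementation: the `Gap` structure, the callable auditors, and the union-of-findings aggregation rule are all assumptions for the sketch.

```python
from concurrent.futures import ThreadPoolExecutor
from dataclasses import dataclass


@dataclass(frozen=True)
class Gap:
    """One evidence gap found by an auditor (hypothetical shape)."""
    tier: str      # "deep-evidence", "impacted", or "sentinel"
    subject: str   # the AC or scenario the gap concerns
    reason: str


def parallel_adversarial_review(auditors, work):
    """Run paired auditors over the same work in parallel and merge findings.

    Assumed aggregation rule: a gap flagged by either auditor counts as a gap.
    """
    with ThreadPoolExecutor(max_workers=len(auditors)) as pool:
        findings = list(pool.map(lambda audit: audit(work), auditors))
    gaps = frozenset().union(*findings)
    return {"result": "clean" if not gaps else "gaps", "gaps": gaps}
```

In the real skill the two auditors are subagent dispatches, not in-process callables; the thread pool here only models the "same work, evaluated in parallel" shape of PAR.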
| Tier | What it checks | Failure means |
|---|---|---|
| Deep evidence | Every AC + proof obligation for current iteration | Story not done, evidence too weak |
| Impacted behavior | Scenarios whose surfaces were touched | Stale or broken scenario |
| Sentinel corpus | High-value journey scenarios | Regression in previously-working behavior |
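The three tiers in the table can be pictured as a bucketing step over stories and scenarios. The sketch below is illustrative only; the field names (`status`, `touched`, `tags`) are assumptions, not the skill's real file schema.

```python
def select_audit_targets(stories, scenarios, current_iter):
    """Bucket audit targets into the three tiers (illustrative sketch)."""
    return {
        # Tier 1: every AC of stories marked done in the current iteration
        "deep_evidence": [s for s in stories
                          if s.get("status") == f"done:ITER-{current_iter}"],
        # Tier 2: scenarios whose surfaces this iteration touched
        "impacted": [s for s in scenarios if s.get("touched")],
        # Tier 3: sentinel-tagged scenarios from the behavior corpus
        "sentinel": [s for s in scenarios if "sentinel" in s.get("tags", ())],
    }
```

A scenario can land in more than one tier (a touched sentinel scenario is audited both as impacted and as a corpus regression check).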
| Reads | Writes | Dispatches |
|---|---|---|
| `requirements/`, `behavior-scenarios.md`, `behavior-corpus.md`, product code/tests | `requirements/` (gaps), `roadmap.md` (new iteration) if gaps, `behavior-scenarios.md` (stale flags) | Two auditor subagents in parallel (PAR) |
- `skills/shared/parallel-adversarial-review.md` — PAR methodology
- `skills/shared/par-reviewer-wrapper.md` — competitive framing wrapper
- `skills/shared/behavior-evidence-formats.md` — scenario and proof obligation formats
- `auditor-subagent-prompt.md` — auditor-specific prompt template