Reviews code changes using parallel personas for correctness, testing, maintainability, and conditional areas like security, performance, APIs. Merges into P0-P3 severity reports for PR prep and iterative feedback.
`npx claudepluginhub tmchow/tmc-marketplace --plugin iterative-engineering`

This skill uses the workspace's default tool permissions.
- Reviews code changes using dynamically selected reviewer personas. Spawns parallel sub-agents that return structured JSON, then merges and deduplicates findings into a single report.
- Verifies tests pass on a completed feature branch; presents options to merge locally, create a GitHub PR, keep as-is, or discard; executes the choice and cleans up the worktree.
- Guides root-cause investigation for bugs, test failures, unexpected behavior, performance issues, and build failures before proposing fixes.
- Writes implementation plans from specs for multi-step tasks, mapping files and breaking work into bite-sized TDD steps before coding.
Reviews code changes using dynamically selected reviewer personas. Spawns parallel sub-agents that return structured JSON, then merges and deduplicates findings into a single report.
(The skill runs standalone or from the iterative:implementing skill.) All reviewers use the P0–P3 severity scale:
| Level | Meaning | Action |
|---|---|---|
| P0 | Critical breakage, exploitable vulnerability, data loss/corruption | Must fix before merge |
| P1 | High-impact defect likely hit in normal usage, breaking contract | Should fix |
| P2 | Moderate issue with meaningful downside (edge case, perf regression, maintainability trap) | Fix if straightforward |
| P3 | Low-impact, narrow scope, minor improvement | User's discretion |
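The severity ladder above maps directly to a merge-gating rule. A minimal sketch (the function name and verdict strings are illustrative, not part of the skill):

```python
# Map each severity level to the action the table above prescribes.
ACTIONS = {
    "P0": "Must fix before merge",
    "P1": "Should fix",
    "P2": "Fix if straightforward",
    "P3": "User's discretion",
}

def verdict(findings):
    """Return the strictest action implied by a list of findings.

    Each finding is a dict with a "severity" key ("P0".."P3").
    An empty list means the review is clean.
    """
    if not findings:
        return "Clean: zero findings"
    worst = min(f["severity"] for f in findings)  # "P0" < "P1" < ... lexically
    return ACTIONS[worst]
```

Note that "clean" is defined as zero findings, matching the severity-acceptance rule later in this skill.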
Eight personas in two tiers. See references/persona-catalog.md for the full catalog.
Always-on (every review):
| Agent | Focus |
|---|---|
| correctness-reviewer | Logic errors, edge cases, state bugs, error propagation |
| testing-reviewer | Coverage gaps, weak assertions, brittle tests |
| maintainability-reviewer | Coupling, complexity, naming, dead code, abstraction debt |
Conditional (selected per diff):
| Agent | Select when diff touches... |
|---|---|
| security-reviewer | Auth, public endpoints, user input, permissions |
| performance-reviewer | DB queries, data transforms, caching, async |
| api-contract-reviewer | Routes, serializers, type signatures, versioning |
| data-migrations-reviewer | Migrations, schema changes, backfills |
| reliability-reviewer | Error handling, retries, timeouts, background jobs |
By default, every review spawns all three always-on reviewers plus any applicable conditionals; the tier model right-sizes the team naturally. A small config change triggers no conditionals (3 reviewers). A large auth feature might trigger security and possibly reliability (5 reviewers). No separate "mode" is needed.
Compute the diff range, file list, and diff in a single Bash call. This minimizes permission prompts. Do not run extra commands.
Chain everything into one command using && and labeled output markers (BASE:, FILES:, DIFF:) so you can parse each section:
Standalone example (single Bash call):
BASE=$(git merge-base HEAD $(git rev-parse --verify --quiet origin/main >/dev/null 2>&1 && echo origin/main || echo origin/master)) && echo "BASE:$BASE" && echo "FILES:" && git diff --name-only ${BASE}..HEAD -- . ':!*.md' && echo "DIFF:" && git diff -U10 ${BASE}..HEAD -- . ':!*.md'
Parse: BASE: = merge-base SHA, FILES: = file list, DIFF: = diff. If no commits on the branch, fall back to unstaged changes (git diff -U10 -- . ':!*.md').
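The labeled markers make the combined output trivially parseable. A minimal sketch of splitting one captured output string on the BASE:/FILES:/DIFF: markers (the function name is illustrative; it assumes the markers appear in the order the chained command guarantees):

```python
def parse_review_scope(output: str):
    """Split the single Bash call's output on its labeled markers.

    Returns (base_sha, file_list, diff_text). Assumes the markers
    appear in BASE:, FILES:, DIFF: order, as the chained command
    guarantees.
    """
    base_part, rest = output.split("FILES:\n", 1)
    files_part, diff_text = rest.split("DIFF:\n", 1)
    base_sha = base_part.split("BASE:", 1)[1].strip()
    files = [line for line in files_part.splitlines() if line.strip()]
    return base_sha, files, diff_text
```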
Understand what the change is trying to accomplish. Run a single Bash call:
echo "BRANCH:" && git rev-parse --abbrev-ref HEAD && echo "COMMITS:" && git log --oneline ${BASE}..HEAD
Combined with conversation context (plan section summary, caller-provided description), write a 2-3 line intent summary:
Intent: Simplify tax calculation by replacing the multi-tier rate lookup
with a flat-rate computation. Must not regress edge cases in tax-exempt handling.
Pass this to every reviewer in their spawn prompt. Intent shapes how hard each reviewer looks, not which reviewers are selected.
When intent is ambiguous: Ask one question: "What is the primary goal of these changes?" Do not spawn reviewers until intent is established.
Read the diff and file list from Stage 1. The 3 always-on reviewers are automatic. For each conditional persona in the catalog (references/persona-catalog.md), decide whether the diff warrants it. This is agent judgment, not keyword matching.
Announce the team before spawning:
Review team:
- correctness (always)
- testing (always)
- maintainability (always)
- security — new endpoint in routes.rb accepts user-provided redirect URL
- data-migrations — adds migration 20260303_add_index_to_orders
This is progress reporting, not a blocking confirmation.
Spawn each selected reviewer as a parallel sub-agent using the template in references/subagent-template.md. Each sub-agent receives:
- references/diff-scope.md
- references/findings-schema.json

Sub-agents are read-only: they review and return structured JSON. They do not edit files, run commands, or propose refactors.
Each sub-agent returns JSON matching references/findings-schema.json:
```json
{
  "reviewer": "security",
  "findings": [...],
  "residual_risks": [...],
  "testing_gaps": [...]
}
```
Convert multiple reviewer JSON payloads into one deduplicated, confidence-gated finding set.
Fingerprint each finding as normalize(file) + line_bucket(line, ±3) + normalize(title). When fingerprints match, merge: keep the highest severity, keep the highest-confidence entry with the strongest evidence, union the evidence, and note which reviewers flagged it. Move findings marked pre_existing: true into a separate list. Assemble the final report using the template in references/review-output-template.md:
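The fingerprint-and-merge step can be sketched as follows. This is a minimal sketch: the normalize helper, the coarse line bucket standing in for ±3 proximity, and the payload field names are assumptions; the real schema lives in references/findings-schema.json.

```python
import re

SEVERITY_RANK = {"P0": 0, "P1": 1, "P2": 2, "P3": 3}

def normalize(text: str) -> str:
    """Lowercase and collapse punctuation/whitespace for stable matching."""
    return re.sub(r"\W+", " ", text.lower()).strip()

def fingerprint(finding: dict) -> tuple:
    """normalize(file) + line_bucket + normalize(title)."""
    return (
        normalize(finding["file"]),
        finding["line"] // 7,  # coarse bucket standing in for ±3 proximity
        normalize(finding["title"]),
    )

def merge_findings(payloads: list) -> list:
    """Deduplicate findings across multiple reviewer payloads."""
    merged = {}
    for payload in payloads:
        for f in payload["findings"]:
            key = fingerprint(f)
            if key not in merged:
                merged[key] = {**f, "reviewers": [payload["reviewer"]]}
                continue
            kept = merged[key]
            # Keep the highest severity (P0 outranks P3).
            if SEVERITY_RANK[f["severity"]] < SEVERITY_RANK[kept["severity"]]:
                kept["severity"] = f["severity"]
            # Union evidence; note every reviewer that flagged it.
            kept["evidence"] = sorted(
                set(kept.get("evidence", [])) | set(f.get("evidence", []))
            )
            kept["reviewers"].append(payload["reviewer"])
    return list(merged.values())
```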
Do not include time estimates. When invoked from iterative:implementing: omit the **Fix order:** line — implementing handles prioritization through its own severity acceptance flow.
This skill does NOT use language-specific reviewer agents. Reviewers adapt their criteria to the language/framework based on project context (loaded automatically). This keeps the skill simple and avoids maintaining parallel reviewers per language.
When invoked from iterative:implementing: return findings directly — implementing owns its own fix loop. Do not enter the standalone fix loop below.
When invoked standalone or from implementation-wrapup: run the standalone fix loop.
After presenting findings and verdict (Stage 6), handle the full fix-review cycle.
This is its own prompt — do not combine it with next-step options. Present severity acceptance whenever the review has findings at ANY severity. Do not interpret "no P0/P1" as "clean" — clean means zero findings. If zero findings, skip to Step 8. Use the platform's interactive question tool — AskUserQuestion (Claude Code) or request_user_input (Codex) — for all severity acceptance prompts. Both platforms provide an automatic "Other" free-form option — do not add one manually.
Present a single prompt listing all severity levels with findings. No intermediate "choose which..." step.
Claude Code — use AskUserQuestion with multiSelect: true:
When P0 or P1 issues exist, pre-check the P0+P1 option:
When only P2/P3 issues exist, nothing pre-checked:
Only include severity levels that have findings.
Codex — use request_user_input (single-select, build combined options):
When P0 or P1 issues exist:
When only P2/P3 issues exist:
Only include options where findings exist at those levels. Omit options that would duplicate another (e.g., if no P3, omit "Fix all" since it equals the line above).
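The option-construction rules above can be sketched as a small function (labels and the pre-check behavior are illustrative; the actual prompt text is platform-specific):

```python
def build_options(counts: dict) -> list:
    """Build the multi-select option list for severity acceptance.

    counts maps severity level -> number of findings at that level.
    Only levels with findings become options; P0/P1 options are
    pre-checked when present.
    """
    options = []
    for level in ("P0", "P1", "P2", "P3"):
        n = counts.get(level, 0)
        if n == 0:
            continue  # only include severity levels that have findings
        options.append({
            "label": f"Fix {level} ({n} finding{'s' if n != 1 else ''})",
            "checked": level in ("P0", "P1"),
        })
    return options
```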
Fix only the selected severities. Spawn one or more subagents with the filtered findings, affected file paths, and diff range from Stage 1. Each subagent applies fixes, runs tests, and commits.
Wait for all fixes to complete before proceeding.
After fixes land, present an interactive choice:
If another round: run the full Stage 1–Step 7 flow again (fresh sub-agents, fresh scope).
After the fix-review cycle completes (clean verdict or user chose to stop), present next steps via the platform's interactive question tool.
On a feature branch:
On main/master:
If "Create a PR": push the branch and use gh pr create with a title and summary derived from the branch changes.
If the platform doesn't support parallel sub-agents, run reviewers sequentially. Everything else (stages, output format, merge pipeline) stays the same.
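The fallback amounts to collapsing the fan-out into a loop over the same reviewer list. A minimal sketch, where run_reviewer is a stand-in for however the platform spawns a sub-agent and returns its JSON payload; everything downstream is unchanged:

```python
from concurrent.futures import ThreadPoolExecutor

def run_reviewers(reviewers, run_reviewer, parallel=True):
    """Collect one JSON payload per reviewer, in parallel or sequentially.

    run_reviewer(name) stands in for spawning a sub-agent and
    returning its structured JSON payload.
    """
    if parallel:
        with ThreadPoolExecutor(max_workers=max(len(reviewers), 1)) as pool:
            return list(pool.map(run_reviewer, reviewers))
    # Sequential fallback: same reviewers, same payloads, one at a time.
    return [run_reviewer(name) for name in reviewers]
```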