Reviews risky/notable assumptions and context gaps from implementor session reports, fixes high-confidence issues directly, and asks one clarifying question at a time for ambiguities.
```
npx claudepluginhub codagent-ai/agent-skills --plugin codagent
```

This skill uses the workspace's default tool permissions.
The implementor(s) produced session report(s) (via `codagent:session-report`) listing `risky` / `notable` assumptions and context gaps. Assume the implementors did not have the plan's intent; audit the findings against that intent, act on confident fixes, and ask about ambiguous cases.
Work through these in order:
Before extracting findings, map reports to tasks:
- Treat the report label (e.g. `implement-tasks_0`) only as an iteration id; do not infer the task name from order or memory.
- Use `step_start` / `iteration_start` metadata when available to recover the `task_file` parameter.
- Record each mapping as `[iteration id | task file path | short task name]`.
- If a report cannot be mapped, mark the task unknown and treat that as an ambiguity to surface in the final summary.
- Do not renumber tasks in the final summary. Use the mapped task file path or short task name. This prevents mixing up "task 0" and "task 1" when workflow iteration order differs from human-readable task order.
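For instance (the task file paths and names below are hypothetical), mapped entries might look like:

```
implement-tasks_0 | tasks/03-rate-limiting.md | rate limiting
implement-tasks_1 | tasks/01-auth-middleware.md | auth middleware
implement-tasks_2 | (unknown) | — surface as ambiguity in the final summary
```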
Dispatch a subagent to read each session report and extract findings from its `## Assumption Audit` (with `### Risky` / `### Notable` subsections) and `## Context Gaps` sections. If nothing turns up, report "no findings to review" and stop.
Create a ledger before acting:
| id | iteration | task | severity | finding | disposition | evidence | action |
| --- | --- | --- | --- | --- | --- | --- | --- |
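A filled-in row (the finding, file name, and commit sha here are purely illustrative) might read:

```
| 1 | implement-tasks_0 | tasks/03-rate-limiting.md | risky | assumed a 60s rate-limit window | fixed | service config defaults to 30s | changed default to 30s, commit abc1234 |
```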
Allowed dispositions:
- `fixed` — high-confidence issue fixed in code/docs.
- `resolved-with-user` — ambiguous issue answered by the user, then acted on or left as-is.
- `accepted` — reviewed and intentionally left unchanged because it matches the plan or the risk is acceptable.
- `deferred` — a real issue or risk remains but is outside this review's authority/scope; include why and who should handle it.
- `context-gap` — only for findings originally reported under `## Context Gaps`, or for cases where the report itself cannot be mapped to a task.

Hard requirements:
A finding that hinges on a product decision must not be quietly marked `accepted`; if it needs product input, mark it ambiguous and ask.

For each finding, pick one bucket and act accordingly:
If a finding makes a claim about code ("I fixed X", "already handles Y"), have a subagent spot-check before classifying — don't take the implementor's word for it.
When you make your own code-based claim in the final summary, verify it first.
If you're guessing, it's ambiguous. Calibrate your confidence bar honestly.
For ambiguous findings, ask **one question at a time** via the `codagent:ask-questions` skill. This skill intentionally overrides the default batching strategy: each finding must be fully resolved (including applying the user's answer) before raising the next.

Each clarifying question should:
After the user answers, apply the change exactly like a high-confidence fix (edit, commit) before moving on. If the answer is "leave as-is", note it and move on.
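As an illustration (the task and finding are hypothetical), a single clarifying question might look like:

```
Finding (tasks/03-rate-limiting.md, risky): the implementor assumed a 60s
rate-limit window because the plan did not specify one.
Question: should the window stay at 60s as implemented, or match the 30s
default used elsewhere in the service?
```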
Context gaps are feedback on the plan itself — missing information that sent an implementor down a wrong path. Do not try to fix them. Collect verbatim and surface in the final summary so the user can update the plan, spec, or future task generation.
Do not move normal assumptions into this section unless the session report listed them as context gaps. For example, "implementation used a heuristic but could be stricter" is usually an assumption to accept, fix, defer, or ask about — not a context gap.
End with one message structured as:
## Review Summary
### Fixed (high-confidence)
- [task file or short name] — [assumption] → [what you changed, commit sha]
### Resolved with user
- [task file or short name] — [assumption] → [user's choice] → [what you changed, or "left as-is"]
### Reviewed, no change
- [task file or short name] — [assumption] → [accepted/deferred] — [evidence-backed rationale]
### Context gaps
- [task file or short name] — [gap, verbatim] — Missing: [what was needed]
Omit any section with zero items.
Every non-context-gap finding must appear in exactly one of Fixed, Resolved with user, or Reviewed, no change. Context gaps must appear in Context gaps.
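Putting it together, assuming two findings (the task files, sha, and details below are illustrative, not prescribed), a final summary might look like:

```markdown
## Review Summary

### Fixed (high-confidence)
- tasks/03-rate-limiting.md — assumed a 60s window — changed default to 30s to match service config, commit abc1234

### Context gaps
- tasks/01-auth-middleware.md — "plan did not say which session store to use" — Missing: a session-store decision; the implementor guessed
```

Sections with zero items are omitted, per the rule above.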