Guides handling code review feedback: read and restate comments, verify against codebase, evaluate technical soundness, respond with acknowledgment or evidence-based pushback. Prevents blind implementations and bugs from unverified changes.
Install: `npx claudepluginhub nyldn/claude-octopus --plugin octo`

This skill uses the workspace's default tool permissions.
> **Host: Codex CLI** — This skill was designed for Claude Code and adapted for Codex.
> Cross-reference commands use installed skill names in Codex rather than `/octo:*` slash commands. Use the active Codex shell and subagent tools. Do not claim a provider, model, or host subagent is available until the current session exposes it. For host tool equivalents, see `skills/blocks/codex-host-adapter.md`.
Code review requires technical evaluation, not performative agreement.
Never blindly implement review feedback. Verify it's correct for THIS codebase before changing anything.
WHEN receiving code review feedback:
1. READ — Complete feedback without reacting
2. RESTATE — Summarize the requirement in your own words
3. VERIFY — Check against actual codebase state
4. EVALUATE — Is this technically sound for THIS context?
5. RESPOND — Technical acknowledgment OR reasoned pushback
6. IMPLEMENT — One item at a time, verify each change
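As a loose sketch (the skill itself is prose; the class and field names here are invented for illustration), the six steps can be modeled as an ordered checklist that gates implementation behind the earlier steps:

```python
# Hypothetical sketch of the 6-step protocol. Names are illustrative,
# not part of the skill; the point is that "implement" is unreachable
# until read/restate/verify/evaluate/respond have all happened, in order.
from dataclasses import dataclass, field

STEPS = ["read", "restate", "verify", "evaluate", "respond", "implement"]

@dataclass
class FeedbackItem:
    comment: str
    completed: list = field(default_factory=list)

    def advance(self, step: str) -> None:
        """Enforce strict ordering: no skipping ahead to implement."""
        expected = STEPS[len(self.completed)]
        if step != expected:
            raise ValueError(f"must do '{expected}' before '{step}'")
        self.completed.append(step)

item = FeedbackItem("handle null input")
for step in STEPS[:-1]:
    item.advance(step)
item.advance("implement")  # only reachable after the other five steps
```

Jumping straight to `advance("implement")` on a fresh item raises, which is the whole discipline in one line: no change lands without the verification steps in front of it.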
NEVER say:
- "You're absolutely right!" before checking the code
- "Great catch!" as a reflex
- "Fixing now" before evaluating the suggestion

These are social performance, not technical evaluation. They lead to blind implementations and bugs from unverified changes.
For each piece of feedback:
| Question | If YES | If NO |
|---|---|---|
| Is the issue real? (verify in code) | Continue evaluation | Push back with evidence |
| Does the suggested fix work here? | Continue evaluation | Propose alternative |
| Does fixing this break something else? | Fix both or push back | Implement the fix |
| Is this a style preference rather than a real problem? | Acknowledge, deprioritize | Fix it |
| Was this already considered and rejected? | Explain the trade-off | Implement |
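A minimal sketch of the table as a triage routine (the boolean parameters are assumptions for illustration; answering them honestly still requires reading the actual code):

```python
# Illustrative triage of one feedback item, following the table's
# question order. Each parameter is the answer to one row's question.
def triage(issue_is_real: bool, fix_works_here: bool,
           fix_breaks_other: bool, style_only: bool,
           already_rejected: bool) -> str:
    if not issue_is_real:
        return "push back with evidence"
    if not fix_works_here:
        return "propose alternative"
    if fix_breaks_other:
        return "fix both or push back"
    if style_only:
        return "acknowledge, deprioritize"
    if already_rejected:
        return "explain the trade-off"
    return "implement the fix"

print(triage(True, True, False, False, False))  # → implement the fix
print(triage(False, True, False, False, False))  # → push back with evidence
```

The early returns mirror the table: only feedback that survives every check gets implemented.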
When feedback is wrong or doesn't apply:
> Reviewer: "This function should handle null input"
>
> Response: "Checked — this function is only called from `processUser()`
> (line 47) which validates non-null before dispatch. Adding null handling
> here would be dead code. The caller contract guarantees non-null."
Provide:
- The specific code you checked (file and line references)
- What you found there
- Why that makes the feedback incorrect or inapplicable here
In Claude Octopus workflows, review feedback comes from multiple sources, including human reviewers and cross-provider critiques. The same protocol applies to each.
When providers disagree, the codebase is the arbiter: verify each claim against the actual code and side with the evidence, not the provider.
When a reviewer flags an issue and you fix it, verify the change actually resolves the flagged behavior before responding, one item at a time.
If the same issue keeps coming back, re-verify your original reasoning; if it still holds, restate the evidence rather than conceding to repetition.
If a reviewer suggests something that contradicts the spec/requirements, surface the conflict rather than silently following either side. Requirements trump review suggestions. User intent trumps both.