Spec-drift-aware review feedback handling for CCC Stage 6. When receiving PR review comments or adversarial review findings, cross-references each item against the active spec's acceptance criteria to determine if feedback is in-scope, represents drift, or reveals a legitimate spec gap. Follows the READ-UNDERSTAND-VERIFY-EVALUATE-RESPOND-IMPLEMENT protocol. Integrates with adversarial-review output and .ccc-state.json task context. Use when receiving code review comments, adversarial review findings, PR feedback, or any post-implementation critique that may require code changes. Trigger with phrases like "handle review feedback", "respond to review", "PR comments", "review findings", "is this feedback in scope", "spec drift from review", "adversarial review response", "triage review comments", "reviewer suggests".
npx claudepluginhub cianos95-dev/claude-command-centre --plugin claude-command-centre

This skill uses the workspace's default tool permissions.
Spec-drift-aware protocol for handling review feedback during CCC Stage 6. This skill is the _receiving_ counterpart to the `adversarial-review` skill: adversarial-review defines how to _give_ structured feedback; review-response defines how to _receive, triage, and act on_ it.
Review feedback is the most common source of uncontrolled scope creep. Without a spec-aware triage protocol, every reviewer suggestion risks being implemented on sight, and the PR grows beyond what the spec authorizes.
Every piece of review feedback passes through six stages:
READ → UNDERSTAND → VERIFY-AGAINST-SPEC → EVALUATE → RESPOND → IMPLEMENT
No stages may be skipped. In particular, VERIFY-AGAINST-SPEC must happen before EVALUATE — you cannot assess whether feedback is worth implementing until you know whether it's in-scope.
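The stage ordering can be sketched as a small state machine. This is an illustrative aid, not part of the skill itself; the names are assumptions.

```python
from enum import IntEnum

class Stage(IntEnum):
    # Ordered stages of the review-response protocol; VERIFY_AGAINST_SPEC
    # must complete before EVALUATE, and no stage may be skipped.
    READ = 1
    UNDERSTAND = 2
    VERIFY_AGAINST_SPEC = 3
    EVALUATE = 4
    RESPOND = 5
    IMPLEMENT = 6

def advance(current: Stage, target: Stage) -> Stage:
    """Move to the next stage, rejecting any attempt to skip ahead."""
    if target != current + 1:
        raise ValueError(f"cannot skip from {current.name} to {target.name}")
    return target
```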
Goal: Capture the reviewer's complete intent without interpretation.
Process:
Read the full comment. Not just the suggestion — the context, the reasoning, the severity.
Identify the feedback type. Review feedback falls into categories:
| Type | Signal | Example |
|---|---|---|
| Defect | "This is broken" | "This will crash when input is null" |
| Specification | "This doesn't match the spec" | "AC says 20 per page but this returns all results" |
| Enhancement | "This could be better" | "Consider adding caching here" |
| Style | "This should look different" | "Rename this variable to be more descriptive" |
| Question | "I don't understand" | "Why was this approach chosen over X?" |
| Architecture | "This should be structured differently" | "Extract this into a separate service" |
Note the severity if provided. Adversarial review findings use the CCC severity format:
| Severity | Meaning | Default Action |
|---|---|---|
| P1 — Critical | Blocks acceptance. The implementation does not meet the spec. | Must fix before merge |
| P2 — Important | Significant issue but doesn't block the core acceptance criteria. | Should fix, justify if not |
| P3 — Consider | Nice-to-have improvement. The implementation is correct without it. | Evaluate ROI, may defer |
External review comments (GitHub PRs, etc.) won't use this format — assign a severity during the EVALUATE stage.
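The severity-to-action mapping above, including external comments that arrive without a severity, can be sketched as follows (hypothetical helper, mirroring the table):

```python
# Default actions keyed by CCC severity, per the table above.
DEFAULT_ACTION = {
    "P1": "must fix before merge",
    "P2": "should fix, justify if not",
    "P3": "evaluate ROI, may defer",
}

def default_action(severity):
    # External review comments (GitHub PRs, etc.) may arrive without a
    # severity; one is assigned during EVALUATE, so None is a valid
    # intermediate state here.
    if severity is None:
        return "assign severity during EVALUATE"
    return DEFAULT_ACTION[severity]
```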
Goal: Translate the reviewer's comment into a concrete, actionable statement.
Process:
Restate the feedback in your own words. If you can't restate it, you don't understand it — ask for clarification.
Identify the concrete change the reviewer is suggesting. "This could be better" is not actionable. "Extract the validation logic into a validateInput() function" is actionable.
Separate the observation from the suggestion. Sometimes the reviewer identifies a real problem but suggests the wrong fix. Evaluate both independently:
OBSERVATION: "The error handling here swallows exceptions silently"
SUGGESTION: "Add a global error handler"
The observation may be correct (spec says errors should be surfaced).
The suggestion may be out of scope (global error handler is a separate feature).
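One way to keep the observation and the suggestion independently evaluable is to store them as separate fields with separate verdicts. The field names below are assumptions for illustration only.

```python
from dataclasses import dataclass

@dataclass
class FeedbackItem:
    # Observation and suggestion carry independent verdicts: the
    # observation may be in-scope while the suggested fix is drift.
    raw_comment: str
    observation: str
    suggestion: str
    observation_verdict: str = ""
    suggestion_verdict: str = ""
```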
Goal: Determine whether the feedback relates to the active spec's acceptance criteria.
Process:
Read the active spec. Open the PR/FAQ or issue description linked in .ccc-state.json. Load the acceptance criteria checklist.
Classify the feedback:
| Classification | Definition | Action |
|---|---|---|
| IN-SCOPE | Feedback directly relates to an acceptance criterion | Proceed to EVALUATE |
| SPEC-GAP | Feedback reveals a legitimate gap in the spec — the spec should have covered this but doesn't | Flag for spec update |
| DRIFT | Feedback would move the implementation beyond what the spec authorizes | Pushback with spec reference |
| UNRELATED | Feedback is about code not covered by this spec (e.g., adjacent module) | Defer to a new issue |
For each classification, cite the evidence:
FEEDBACK: "Add rate limiting to this endpoint"
CLASSIFICATION: DRIFT
EVIDENCE: Spec AC does not mention rate limiting. No acceptance criterion
requires it. AC #1-#5 are all functional requirements for the search feature.
Rate limiting is a cross-cutting concern that belongs in a separate spec.
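The classification table maps each verdict to a next action, and a classification without cited evidence is invalid. A sketch with hypothetical names:

```python
# Next action for each spec classification, per the table above.
NEXT_ACTION = {
    "IN-SCOPE": "proceed to EVALUATE",
    "SPEC-GAP": "flag for spec update",
    "DRIFT": "push back with spec reference",
    "UNRELATED": "defer to a new issue",
}

def classify_action(classification: str, evidence: str) -> str:
    # Every classification must cite evidence from the spec's ACs.
    if not evidence.strip():
        raise ValueError("every classification must cite evidence")
    return NEXT_ACTION[classification]
```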
Goal: For IN-SCOPE and SPEC-GAP feedback, determine priority and implementation effort.
Process:
Assign severity (if not already provided by the reviewer), using the P1/P2/P3 definitions from the READ stage.
Estimate effort: How much work is the fix?
Apply the decision matrix:
| Severity | Effort: Trivial | Effort: Small | Effort: Medium | Effort: Large |
|---|---|---|---|---|
| P1 | Fix immediately | Fix immediately | Fix immediately | Fix (schedule if needed) |
| P2 | Fix immediately | Fix in this PR | Discuss with reviewer | Create follow-up issue |
| P3 | Fix if convenient | Create follow-up issue | Create follow-up issue | Defer |
For DRIFT items: Do not evaluate effort. Drift items are rejected with a spec reference, not deferred for later.
For SPEC-GAP items: Propose a spec update before implementing. The spec update must be approved before the implementation changes.
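The decision matrix can be encoded directly. This is a sketch mirroring the table above; DRIFT items never reach it, and SPEC-GAP items reach it only after the spec update is approved.

```python
# Decision matrix: (severity, effort) -> action, per the table above.
MATRIX = {
    "P1": {"trivial": "fix immediately", "small": "fix immediately",
           "medium": "fix immediately", "large": "fix (schedule if needed)"},
    "P2": {"trivial": "fix immediately", "small": "fix in this PR",
           "medium": "discuss with reviewer", "large": "create follow-up issue"},
    "P3": {"trivial": "fix if convenient", "small": "create follow-up issue",
           "medium": "create follow-up issue", "large": "defer"},
}

def decide(severity: str, effort: str) -> str:
    # Only IN-SCOPE (and approved SPEC-GAP) items are evaluated here.
    return MATRIX[severity][effort]
```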
Goal: Acknowledge every piece of feedback with a clear response.
Response templates by classification:
IN-SCOPE — Will Fix:
Agreed — this conflicts with AC #[N] ("[criterion text]"). Fixing in [commit/next push].
IN-SCOPE — Will Defer:
Valid point. This is P3 and medium effort — creating [ISSUE-ID] to address in a
follow-up. Not blocking this PR as AC #[N] is satisfied by the current implementation.
SPEC-GAP — Proposing Update:
Good catch — the spec doesn't cover [scenario]. Proposing an AC addition:
"[new criterion text]". If approved, I'll implement in this PR / a follow-up.
DRIFT — Pushback:
This would add functionality beyond the current spec scope. The active spec
(AC #1-#[N]) covers [scope summary]. [Suggestion] is a valuable enhancement
that should be specced separately. Created [ISSUE-ID] to track it.
Happy to discuss if you believe this should be in-scope for this spec.
UNRELATED — Redirect:
This relates to [module/feature] which is outside the scope of [ISSUE-ID].
Created [NEW-ISSUE-ID] to address it properly.
Question — Clarify:
[Direct answer to the question with reference to spec or design decision]
Every feedback item gets a response. Unreplied feedback signals that it was ignored, not that it was evaluated and deferred.
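As an illustration, the DRIFT pushback template can be filled programmatically so the spec reference is never omitted. The helper and argument names below are assumptions.

```python
def drift_pushback(ac_count: int, scope: str, suggestion: str, issue_id: str) -> str:
    # Fills the DRIFT template above; a rejection must cite the spec's ACs.
    return (
        f"This would add functionality beyond the current spec scope. "
        f"The active spec (AC #1-#{ac_count}) covers {scope}. {suggestion} is a "
        f"valuable enhancement that should be specced separately. "
        f"Created {issue_id} to track it."
    )
```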
Goal: Implement fixes in priority order, maintaining spec alignment throughout.
Process:
Fix P1s first. Critical findings block merge — address them before anything else.
Batch P2 fixes by area. If multiple P2 findings affect the same file/function, fix them together to avoid repeated context switches.
Run the test suite after each fix batch. Ensure no regressions. If a fix breaks a test, the fix is wrong or the test needs updating — enter the debugging-methodology loop.
Update the coverage table. If using TDD enforcement, new fixes may require new tests (RED → GREEN → REFACTOR for each fix).
Respond to the reviewer on each resolved item with the commit SHA or file:line reference.
For SPEC-GAP approved updates: Update the spec (PR/FAQ or issue description) before implementing. The spec drives the implementation, not the reverse.
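The fix ordering above (P1s first, P2s batched by file, P3s last) can be sketched as follows; the field names are assumptions.

```python
def implementation_order(findings):
    # findings: list of dicts with "severity" ("P1"/"P2"/"P3") and "file".
    # P1s block merge, so they come first; P2s are grouped by file to
    # avoid repeated context switches; P3s trail.
    p1 = [f for f in findings if f["severity"] == "P1"]
    p2 = sorted((f for f in findings if f["severity"] == "P2"),
                key=lambda f: f["file"])
    p3 = [f for f in findings if f["severity"] == "P3"]
    return p1 + p2 + p3
```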
When feedback comes from the CCC adversarial review process (via /ccc:review or the reviewer agents), additional structure is available:
Adversarial review produces findings in this format:
### [SEVERITY] [Finding Title]
**Category:** [Security | Performance | Architecture | UX | Correctness]
**Spec Reference:** [AC #N or "none"]
[Description of the finding]
**Recommendation:** [Specific suggested fix]
When the full 4-persona debate runs, findings arrive pre-classified with severity, category, and spec reference in the format above.
When two reviewers suggest opposite changes (e.g., Performance says "add caching" but Security says "caching creates staleness risk"), implement neither: record the contradiction and escalate it to the human for a decision.

Review progress is tracked in the review_state block of .ccc-state.json:
{
"current_task": "CIA-123",
"execution_mode": "tdd",
"review_state": {
"source": "adversarial-review",
"findings_total": 8,
"findings_resolved": 5,
"findings_deferred": 1,
"findings_rejected": 2,
"pending": [
{
"id": "F3",
"severity": "P2",
"classification": "IN-SCOPE",
"status": "implementing"
}
]
}
}
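A sketch of moving a pending finding to resolved in that structure. The helper name is an assumption; it assumes the review_state shape shown above.

```python
def resolve_finding(state: dict, finding_id: str) -> dict:
    # Remove the finding from "pending" and bump the resolved counter,
    # keeping the .ccc-state.json review_state block consistent.
    rs = state["review_state"]
    rs["pending"] = [f for f in rs["pending"] if f["id"] != finding_id]
    rs["findings_resolved"] += 1
    return state
```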
At the end of a session with unresolved review feedback, record the pending items in .ccc-state.json so the next session can continue.

Symptom: Every reviewer suggestion gets implemented immediately. The PR grows by 40% in the review cycle.
Fix: VERIFY-AGAINST-SPEC before EVALUATE. Drift items and P3 items get deferred, not implemented. The PR scope is defined by the spec, not by the review.
Symptom: Every reviewer suggestion is pushed back as "out of scope." The reviewer feels unheard.
Fix: Distinguish between DRIFT (genuinely out of scope) and SPEC-GAP (the spec should have covered this). Spec gaps are the spec's fault, not the reviewer's. Acknowledge the gap even as you defer the fix.
Symptom: Creating follow-up issues but not telling the reviewer. The reviewer checks back and finds their feedback "unresolved."
Fix: Every feedback item gets an explicit response. Deferred items include the follow-up issue ID. Rejected items include the spec reference justifying rejection.
Symptom: Major architectural changes suggested in review get implemented without spec update. The implementation drifts from the spec because "the reviewer said to."
Fix: Reviewer authority does not override spec authority. Architectural suggestions that change the design go through the spec amendment process: propose the change, get approval, update the spec, then implement.
Symptom: Treating every finding as P1. Everything is "critical," so nothing is prioritized.
Fix: Use the severity definitions strictly. P1 = blocks acceptance criterion. If the current implementation satisfies the acceptance criterion, the finding is P2 at most. Reserve P1 for genuine spec violations.
Symptom: 15 P3 findings each take "just 5 minutes." Total: 75 minutes of unplanned work on nice-to-haves.
Fix: Apply the effort threshold. P3 + Small effort = follow-up issue. P3 + Trivial = fix if convenient (defined as: requires no additional test, no additional file, no architectural change). "Convenient" does not mean "I could do it."
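The definition of "convenient" given above is mechanical enough to encode. An illustrative sketch; the flag names are assumptions.

```python
def is_convenient(fix: dict) -> bool:
    # "Convenient" per the text: requires no additional test, no
    # additional file, and no architectural change. It does not mean
    # "I could do it."
    return not (fix["needs_new_test"]
                or fix["needs_new_file"]
                or fix["architectural_change"])
```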
No implementation before VERIFY-AGAINST-SPEC. The natural instinct is to start fixing immediately. Resist. Classification first.
Drift rejection requires a spec reference. Do not push back with "I don't think this is in scope." Push back with "AC #1-#5 cover [summary]; this suggestion adds [X] which is not in any AC."
Every deferred item becomes an issue. If feedback is valid but deferred, it must exist as a tracked issue, not a mental note. If it's worth deferring, it's worth tracking.
Spec gaps are not the reviewer's problem. When the reviewer finds a legitimate gap, thank them. The gap is the spec's fault. Propose the update promptly.
Contradictions block implementation. When findings conflict, the resolution is a decision, not a code change. Escalate contradictions to the human.
After the RUVERI protocol completes and the human has filled in the Decision column on the RDR, convert findings into actionable Linear work items for agent dispatch. This bridges the gap between "review is done" and "implementation starts."
| Decision | Effort | Action |
|---|---|---|
| Agreed | Trivial/Small | Create sub-issue under reviewed issue. Label type:chore, estimate 1-2pt. Delegate to Factory via delegateId or assign directly. |
| Agreed | Medium+ | Create sub-issue. Include in next dispatch batch (see parallel-dispatch/SKILL.md Section 4). Assign to Claude Code session or Factory. |
| Deferred | Any | Create sub-issue, label type:chore, move to Backlog. Link back to RDR comment. |
| Rejected | — | No sub-issue. Document rejection reason in the RDR. |
| Escalated | — | No sub-issue. Flag for human decision. Do not create work items for unresolved escalations. |
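The decision table and the sub-issue conventions can be sketched as follows. The helper names are assumptions; the severity-to-priority mapping and title convention are from the sub-issue field list.

```python
def creates_subissue(decision: str) -> bool:
    # Per the decision table: only Agreed and Deferred findings become
    # sub-issues; Rejected and Escalated findings do not.
    return decision in {"Agreed", "Deferred"}

# Severity maps to Linear priority: P1 -> Urgent, P2 -> High, P3 -> Low.
SEVERITY_TO_PRIORITY = {"P1": "Urgent", "P2": "High", "P3": "Low"}

def subissue_title(finding_id: str, description: str) -> str:
    # Title convention for finding sub-issues.
    return f"Review finding {finding_id}: {description}"
```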
Each finding sub-issue should include:
- Title: `Review finding {ID}: {Short description}` (e.g., "Review finding C1: Add input validation for config schema")
- Labels: `type:chore`, severity mapped to priority (P1 → Urgent, P2 → High, P3 → Low)

For findings delegated to agents via Linear comments, follow the delegation flow in the decision table above. For findings requiring a Claude Code session, include them in the next dispatch batch (see parallel-dispatch/SKILL.md).

When an RDR produces 3+ agreed findings, batch them into a single plan (see planning-preflight/references/plan-format.md).

The reviewed issue's RDR comment should be updated with sub-issue links as findings are converted:
| Finding | Severity | Decision | Sub-Issue | Status |
|---------|----------|----------|-----------|--------|
| C1 | P1 | Agreed | [CIA-YYY](url) | Done |
| I1 | P2 | Agreed | [CIA-ZZZ](url) | In Progress |
| I2 | P2 | Deferred | [CIA-AAA](url) | Backlog |
| C2 | P3 | Rejected | — | — |
When all Agreed sub-issues are Done, the review cycle is complete and the parent issue can proceed to merge/closure.