Manage the PR review iteration loop on GitHub: poll for reviewer feedback, assess suggestions with evidence, implement fixes, resolve threads, and drive CI/CD to green. Use after pushing a branch and creating a PR. Works standalone on any PR (bug fix, refactor, hotfix, feature). Spec-aware when a SPEC.md is provided, but does not require one. Triggers: review-cloud, review cloud, PR feedback, iterate on PR, resolve reviews, CI/CD resolution, green pipeline, address reviewer comments.
From the eng plugin (inkeep/team-skills). This skill uses the workspace's default tool permissions.
Bundled files: references/review-protocol.md, scripts/fetch-pr-feedback.sh, scripts/investigate-ci-failures.sh
Manage the full review iteration lifecycle for a PR: poll for feedback, assess each suggestion with evidence, implement fixes, resolve threads, and drive CI/CD to green.
You are a peer engineer evaluating suggestions — not a subordinate implementing directives. Reviewer comments are hypotheses about your code. Your job is to determine which are correct, with evidence, and act accordingly.
| Input | Required | Default | Description |
|---|---|---|---|
| PR number | No | Inferred from current branch via gh pr view | The pull request to manage. If not provided, detected from the current branch. Fails with a clear error if no PR exists for the branch. |
| Repo | No | Inferred from gh repo view / git remote | owner/repo format. |
| Test command(s) | No | pnpm test --run && pnpm typecheck && pnpm lint | Command(s) to run after implementing changes. Override if your project uses different tooling. |
| SPEC.md path | No | None | If provided, enables spec-aware review assessment (cross-referencing suggestions against design intent). |
Before starting any work, create a task for each stage using TaskCreate with addBlockedBy to enforce ordering. Derive descriptions and completion criteria from each stage's own workflow text.
Mark each task in_progress when starting and completed when its stage's exit criteria are met. On re-entry, check TaskList first and resume from the first non-completed task.
This skill has two stages. Complete Stage 1 before moving to Stage 2.
Reviewer feedback is the primary signal. CI/CD is secondary and opportunistic during this stage.
Verify gh CLI is available and authenticated:
```bash
gh auth status
```
If this fails, stop and tell the user — this skill requires the GitHub CLI.
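The preflight check can be sketched as a small guard function. This is illustrative only — `check_gh` is a hypothetical helper name, not part of the skill's scripts:

```bash
#!/usr/bin/env bash
# Sketch: verify gh is installed and authenticated before doing anything else.
# check_gh is an illustrative helper, not part of the skill's bundled scripts.
check_gh() {
  if ! command -v gh >/dev/null 2>&1; then
    echo "GitHub CLI (gh) not found; install it before running this skill." >&2
    return 1
  fi
  if ! gh auth status >/dev/null 2>&1; then
    echo "gh is not authenticated; run 'gh auth login' first." >&2
    return 1
  fi
}
```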
If a PR number was not provided, detect it from the current branch:
```bash
# Infer PR number from current branch
gh pr view --json number -q '.number'
```
If no PR exists for the current branch, stop and tell the user — this skill requires an existing PR to iterate on.
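A minimal sketch combining the explicit-input and detection paths (the `resolve_pr_number` helper is illustrative, not part of the skill's scripts):

```bash
#!/usr/bin/env bash
# Sketch: prefer an explicitly provided PR number; otherwise detect it
# from the current branch, failing with a clear message if no PR exists.
resolve_pr_number() {
  local explicit="${1:-}"
  if [ -n "$explicit" ]; then
    printf '%s\n' "$explicit"
    return 0
  fi
  # gh exits non-zero when the current branch has no associated PR
  gh pr view --json number -q '.number' 2>/dev/null || {
    echo "No PR found for the current branch; create one before running this skill." >&2
    return 1
  }
}
```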
Then determine what's already in flight before taking action:
Script path resolution: All scripts/ paths below are relative to this skill's base directory (shown in the skill header when loaded), not your current working directory. Resolve the full path before invoking. For example, if the skill base is ~/.claude/plugins/cache/.../skills/review-cloud, then scripts/fetch-pr-feedback.sh means ~/.claude/plugins/cache/.../skills/review-cloud/scripts/fetch-pr-feedback.sh. If the scripts are not found, fall back to the equivalent gh api commands in references/review-protocol.md.
```bash
# Check for unpushed local changes
git status
git log origin/HEAD..HEAD --oneline

# Check for existing review feedback (use full path to skill's scripts/ dir)
<skill-base>/scripts/fetch-pr-feedback.sh <pr-number>
```
| Starting state | Action |
|---|---|
| Local changes not yet pushed | Push, update PR description if needed, then proceed to step 2. |
| Already pushed, review feedback waiting | Skip push. Proceed directly to step 3 (assess feedback). |
| Already pushed, no feedback yet | Proceed to step 2 (poll). |
| Merge conflict detected (CONFLICTING in merge status) | Rebase on the target branch (git fetch origin main && git rebase origin/main), resolve conflicts, verify tests pass, then push. Proceed after the PR is mergeable. |
When pushing: update the PR body if the implementation has changed materially. Load the /pr skill to rewrite it — the PR body is a stateless snapshot of the PR's scope relative to origin/main, not a history of its evolution. Not every small fix requires a body edit.
Load: references/review-protocol.md (section: "PR body")
Wait approximately 6 minutes, then check for reviewer feedback. Opportunistically check CI/CD at the same time.
Always use the skill's scripts for polling — do not call gh pr checks or gh api directly. The scripts handle exit codes, parse output, and categorize results (passed/failed/pending). Raw gh pr checks returns exit code 8 for pending checks, which looks like an error but is normal — the scripts absorb this.
```bash
# Primary: fetch all review feedback (reviews, inline comments, discussion)
<skill-base>/scripts/fetch-pr-feedback.sh <pr-number> --reviews-only

# Secondary (opportunistic): check CI/CD status
<skill-base>/scripts/fetch-pr-feedback.sh <pr-number> --checks-only

# Or fetch everything at once:
<skill-base>/scripts/fetch-pr-feedback.sh <pr-number>
```
Fall back to raw gh pr checks only if the scripts are unavailable (file not found).
Decision logic:
| What's available | Action |
|---|---|
| Review feedback ready, CI/CD not done | Proceed to assess review feedback immediately. Do NOT wait for CI/CD. |
| Review feedback ready, CI/CD also ready | Assess both. Handle review feedback first, then CI/CD failures. |
| No review feedback, CI/CD has failures | Assess and fix CI/CD failures while waiting for review. |
| Neither ready | Wait another 3 minutes and check again. After 3 checks with no results, proceed with other work and check back later. |
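The wait-and-retry cadence in the last row can be sketched as a small loop. `poll_for_feedback` and its arguments are illustrative; the fetch command stands in for <skill-base>/scripts/fetch-pr-feedback.sh:

```bash
#!/usr/bin/env bash
# Sketch: check up to 3 times for review feedback, waiting between attempts.
# The interval parameter makes the ~3-minute cadence configurable.
poll_for_feedback() {
  local fetch_cmd="$1" pr="$2" interval="${3:-180}" attempt
  for attempt in 1 2 3; do
    if "$fetch_cmd" "$pr" --reviews-only | grep -q .; then
      return 0   # feedback available: assess it immediately
    fi
    [ "$attempt" -lt 3 ] && sleep "$interval"
  done
  return 1       # three empty checks: proceed with other work, check back later
}
```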
Load: references/review-protocol.md (section: "Assessment protocol")
The full assessment protocol is in the reference file. The short version:
Resolve each addressed thread using its thread_id from fetch-pr-feedback.sh output with the resolveReviewThread GraphQL mutation (see references/review-protocol.md).

Do not default to acceptance. The path of least resistance (just apply every suggestion) produces worse code than thoughtful evaluation. Equally, do not default to rejection — that wastes valid insights. Never resolve a thread by deferring to "future iterations" — you have no authority to commit to future work. Every suggestion gets a substantive conclusion: accept and implement, or decline with evidence.
For small changes (< 20 lines, single file, clear fix):
For medium changes (20-100 lines, 2-3 files):
For large/complex changes (> 100 lines, architectural, or touching many files): delegate to a subagent (see references/review-protocol.md "Subagent delegation template").

Before pushing, evaluate your own changes:
If you are not confident your fix fully and correctly addresses the reviewer's concern, say so in your reply rather than asserting completeness. "I've addressed this by doing X — please verify this matches your intent" is more useful than implying the matter is settled.
At minimum, run tests after any changes:
```bash
# Default — override via --test-cmd if your project uses different tooling
pnpm test --run
pnpm typecheck
pnpm lint
```
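A small wrapper honoring the test-command override might look like this. `run_checks` and the `TEST_CMD` variable are illustrative names standing in for the --test-cmd input, not part of the skill:

```bash
#!/usr/bin/env bash
# Sketch: run the verification suite, honoring an override.
# TEST_CMD is a hypothetical variable; the default mirrors the skill's pnpm commands.
run_checks() {
  local cmd="${TEST_CMD:-pnpm test --run && pnpm typecheck && pnpm lint}"
  bash -c "$cmd"
}
```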
If changes affect user-facing behavior, also verify the experience manually (API calls, browser testing, etc.) as appropriate. When fixing a reviewer-flagged UI issue and /browser is available, load it to verify the fix: navigate to the affected page, confirm the visual change, and check for console errors (startConsoleCapture / getConsoleErrors). Capture a screenshot as evidence — this strengthens your reply when resolving the thread.
If your changes affect documented behavior — whether product-facing (user docs, API reference, guides) or internal (architecture docs, READMEs, runbooks) — update the relevant documentation files (.md, .mdx, etc.) alongside the code fix. Docs should stay accurate through the review loop, not deferred to later.
Push changes, then return to step 2. Continue the loop.
Not all review feedback is a small fix. If a reviewer surfaces something that requires substantial new work, classify and act:
| Feedback scope | Action |
|---|---|
| Bug fix or correctness issue in existing code | Fix directly in the review loop. |
| New functionality not in scope (scope expansion) | Pause. Consult the user — this is a product/scope decision. |
| Architectural rework of existing implementation | Evaluate proportionality: does the evidence warrant the complexity? If yes, implement directly or via subagent. If not, decline with reasoning. |
| Substantial additive work with clear acceptance criteria | If a workflow orchestrator (e.g., /ship) is driving this review, hand back to the orchestrator for a proper implementation pass. Otherwise, consult the user on how to proceed. |
Do not force substantial rework into the review loop's fix-and-push cycle. Larger changes need their own implementation and testing before they merge into the review flow.
Do not exit until all review feedback is resolved. After resolving threads, re-poll to confirm no new feedback appeared:
```bash
<skill-base>/scripts/fetch-pr-feedback.sh <pr-number> --reviews-only
```
Exit only when ALL of these are true: every review thread has been assessed and resolved (accepted and fixed, or declined with evidence), all resulting changes are pushed, and a re-poll after the latest push surfaced no new feedback.
If new feedback appears after a push, return to step 3 and assess it. Continue looping until the above conditions are met.
If the same feedback keeps recurring after 3 iterations, pause and consult the user.
Once Stage 1 is complete, proceed to Stage 2.
After the review loop is finalized, shift focus to monitoring and resolving CI/CD pipeline results.
```bash
<skill-base>/scripts/fetch-pr-feedback.sh <pr-number> --checks-only
```
Do NOT wait for the entire pipeline to finish. As soon as any check in a given run reports a failure or cancellation, investigate immediately:
```bash
# Always use --compare-main — classification without main branch evidence is guesswork
<skill-base>/scripts/investigate-ci-failures.sh <pr-number> --compare-main
```
Cancelled runs are not "passed." A run cancelled due to runner shutdown, timeout, or resource exhaustion is an unknown — you don't know if the code is clean. Re-trigger it and wait for the result.
Classification requires evidence. You must reach high confidence that a failure is NOT caused by this PR before classifying it as pre-existing or infrastructure. "It looks like infra" is not sufficient — verify with evidence (main branch comparison, retry results, log analysis).
For each failing or cancelled check, investigate then classify:
| Classification | Evidence required | Action |
|---|---|---|
| Caused by this PR's changes | Failure is in code paths touched by this PR; main branch passes the same check | Fix it. |
| Pre-existing failure (fails on main too) | --compare-main shows the same workflow failing on main with the same error | Note in PR comment with evidence (main run ID, matching error). Do not fix unless trivial. |
| Flaky test (passes on retry) | Same check passes on re-run without code changes | Re-trigger: gh run rerun <run-id> --repo <owner/repo>. Wait for the rerun. If it passes, note in PR comment. If it fails again, investigate deeper — two consecutive failures are not "flaky." |
| Infrastructure issue (timeout, runner shutdown, OIDC, resource) | Logs show infrastructure-level error (not test/build failure); main branch shows similar issues OR the error is clearly unrelated to code | Re-trigger: gh run rerun <run-id> --repo <owner/repo>. Wait for the rerun. Note in PR comment with evidence. |
| Cancelled (any reason) | Run was cancelled before completing | Re-trigger: gh run rerun <run-id> --repo <owner/repo>. Wait for the result. Do not classify a cancelled run as "passed" or "not PR-related." |
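The re-trigger policy in the table above can be sketched with a helper that reruns and then blocks on the outcome. `rerun_and_wait` is an illustrative name; get real run IDs from investigate-ci-failures.sh output:

```bash
#!/usr/bin/env bash
# Sketch: re-trigger a failed or cancelled run, then wait for its conclusion.
rerun_and_wait() {
  local run_id="$1" repo="$2"
  gh run rerun "$run_id" --repo "$repo" || return 1
  # --exit-status makes gh exit non-zero if the rerun completes with a failure
  gh run watch "$run_id" --repo "$repo" --exit-status
}
```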
Re-trigger, don't just note. For flaky, infrastructure, and cancelled failures, always re-trigger the run and wait for the result. Noting a failure without retrying leaves an unknown — the pipeline is not green and you have no evidence it would be.
After fixing failures:
Stop when ALL of these are true: every check on the latest commit passes, or is classified as pre-existing or infrastructure with documented evidence (main run ID, matching error, or rerun result), and no runs remain pending or cancelled.
If CI keeps failing on the same issue after 3 attempts, pause and consult the user.
After Stage 1 + Stage 2 complete, assess whether the PR warrants a second full review: for example, when substantial rework landed during the review loop, or the diff has grown well beyond what the original review covered.
To trigger the second pass, leave a PR comment requesting a full review, then loop back to Stage 1 Step 2 (poll for feedback). The same assessment and resolution process applies.
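The trigger itself is just a PR comment. A sketch, with `request_second_pass` as an illustrative helper and example wording:

```bash
#!/usr/bin/env bash
# Sketch: leave the comment that requests a full second-pass review,
# after which the skill loops back to Stage 1 Step 2 (poll for feedback).
request_second_pass() {
  local pr="$1"
  gh pr comment "$pr" --body \
    "Review feedback and CI are resolved; the diff has changed substantially since the last review. Requesting a full second-pass review."
}
```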
For straightforward PRs (single-file config changes, simple bug fixes, cosmetic updates), skip the second pass.
When both stages are complete (and second-pass review if triggered), report to the user (or the invoking skill):
Re-invocation guidance for callers: If new commits are pushed to the PR after this skill reports completion — from any source (caller fixes, user-requested changes, subsequent workflow phases) — the review cycle should be re-invoked. The review loop is complete only when the most recent push has been through a review cycle with all threads resolved.
| Path | Use when | Impact if skipped |
|---|---|---|
| references/review-protocol.md | Assessing reviewer feedback (Stage 1, step 3); PR body guidance; subagent delegation | Mechanical/uncritical response to reviews; missed nuance in feedback assessment |
| /pr skill | Writing or updating the PR body (Stage 1, step 1; after any material push) | Inconsistent PR body structure; missing sections; stale description |
| scripts/fetch-pr-feedback.sh | Fetching review feedback and CI/CD status | Agent uses wrong/deprecated gh commands, misses inline review comments |
| scripts/investigate-ci-failures.sh | Investigating CI/CD failures with logs (Stage 2) | Agent struggles to find run IDs, fetch logs, or compare with main |