From sentry-skills
Iterates on GitHub PRs until CI passes by fetching check failures and logs, categorizing review feedback via the LOGAF scale, and monitoring status. Automates the fix-push-wait loop for CI and reviews.
npx claudepluginhub joshuarweaver/cascade-code-devops-misc-1 --plugin getsentry-skills
This skill uses the workspace's default tool permissions.
Continuously iterate on the current branch until all CI checks pass and review feedback is addressed.
Requires: GitHub CLI (gh) authenticated.
Requires: the uv CLI for Python package management; install guide at https://docs.astral.sh/uv/getting-started/installation/
Important: All scripts must be run from the repository root directory (where .git is located), not from the skill directory. Use the full path to the script via ${CLAUDE_SKILL_ROOT}.
scripts/fetch_pr_checks.py
Fetches CI check status and extracts failure snippets from logs.
uv run ${CLAUDE_SKILL_ROOT}/scripts/fetch_pr_checks.py [--pr NUMBER]
Returns JSON:
{
"pr": {"number": 123, "branch": "feat/foo"},
"summary": {"total": 5, "passed": 3, "failed": 2, "pending": 0},
"checks": [
{"name": "tests", "status": "fail", "log_snippet": "...", "run_id": 123},
{"name": "lint", "status": "pass"}
]
}
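A consumer of this JSON mainly needs the failing entries; a minimal sketch of pulling them out (the payload below is the sample schema above, not live script output):

```python
import json

# Sample payload mirroring the schema above (not live script output).
payload = json.loads("""
{
  "pr": {"number": 123, "branch": "feat/foo"},
  "summary": {"total": 5, "passed": 3, "failed": 2, "pending": 0},
  "checks": [
    {"name": "tests", "status": "fail", "log_snippet": "...", "run_id": 123},
    {"name": "lint", "status": "pass"}
  ]
}
""")

# Collect failing checks so each can be investigated via its run_id.
failed = [c for c in payload["checks"] if c["status"] == "fail"]
for check in failed:
    print(f"{check['name']}: gh run view {check['run_id']} --log-failed")
```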
scripts/fetch_pr_feedback.py
Fetches and categorizes PR review feedback using the LOGAF scale.
uv run ${CLAUDE_SKILL_ROOT}/scripts/fetch_pr_feedback.py [--pr NUMBER]
Returns JSON with feedback categorized as:
high - Must address before merge (h:, blocker, changes requested)
medium - Should address (m:, standard feedback)
low - Optional (l:, nit, style, suggestion)
bot - Informational automated comments (Codecov, Dependabot, etc.)
resolved - Already resolved threads
Review bot feedback (from Sentry, Warden, Cursor, Bugbot, CodeQL, etc.) appears in high/medium/low with review_bot: true; it is NOT placed in the bot bucket.
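The prefix conventions above can be matched mechanically. A minimal sketch of such a classifier (illustrative only, not the script's actual implementation, which also weighs review state and thread resolution):

```python
# Illustrative LOGAF bucketing by comment prefix; the real script also
# inspects review state, thread resolution, and comment author type.
def logaf_bucket(comment: str) -> str:
    text = comment.strip().lower()
    if text.startswith(("h:", "blocker")):
        return "high"
    if text.startswith("m:"):
        return "medium"
    if text.startswith(("l:", "nit", "style", "suggestion")):
        return "low"
    return "medium"  # unprefixed standard feedback defaults to medium

print(logaf_bucket("h: this breaks auth"))   # high
print(logaf_bucket("nit: rename this var"))  # low
```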
scripts/monitor_pr_checks.py
Monitors PR checks until they all reach a terminal state. Retries transient gh failures, treats skipped and cancelled as terminal states, and waits for checks to register after a fresh push instead of exiting early.
uv run ${CLAUDE_SKILL_ROOT}/scripts/monitor_pr_checks.py [--pr NUMBER]
Prints one terminal marker followed by a tab-separated check summary:
ALL_CHECKS_PASSED
CHECKS_DONE_WITH_FAILURES
gh pr view --json number,url,headRefName
Stop if no PR exists for the current branch.
Run ${CLAUDE_SKILL_ROOT}/scripts/fetch_pr_feedback.py to get categorized feedback already posted on the PR.
Auto-fix (no prompt):
high - must address (blockers, security, changes requested)
medium - should address (standard feedback)
When fixing feedback:
This includes review bot feedback (items with review_bot: true). Treat it the same as human feedback:
Prompt user for selection:
low - present a numbered list and ask which to address:
Found 3 low-priority suggestions:
1. [l] "Consider renaming this variable" - @reviewer in api.py:42
2. [nit] "Could use a list comprehension" - @reviewer in utils.py:18
3. [style] "Add a docstring" - @reviewer in models.py:55
Which would you like to address? (e.g., "1,3" or "all" or "none")
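Parsing that reply is straightforward; a minimal sketch (a hypothetical helper, not part of the skill's bundled scripts):

```python
# Hypothetical parser for replies like "1,3", "all", or "none".
def parse_selection(reply: str, total: int) -> list[int]:
    reply = reply.strip().lower()
    if reply in ("none", ""):
        return []
    if reply == "all":
        return list(range(1, total + 1))
    # Keep only in-range numeric entries, preserving order, no duplicates.
    chosen: list[int] = []
    for part in reply.split(","):
        part = part.strip()
        if part.isdigit() and 1 <= int(part) <= total and int(part) not in chosen:
            chosen.append(int(part))
    return chosen

print(parse_selection("1,3", 3))  # [1, 3]
```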
Skip silently:
resolved threads
bot comments (informational only: Codecov, Dependabot, etc.)
Run ${CLAUDE_SKILL_ROOT}/scripts/fetch_pr_checks.py to get structured failure data.
Wait if pending: If review bot checks (sentry, warden, cursor, bugbot, seer, codeql) are still running, wait before proceeding; they post actionable feedback that must be evaluated. Informational bots (codecov) are not worth waiting for.
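The wait decision reduces to a name check against that actionable-bot list; a minimal sketch (the bot names come from this section, the function itself is hypothetical):

```python
# Bots whose pending checks are worth waiting for: they post actionable
# feedback. Informational bots like codecov are not in this set.
ACTIONABLE_BOTS = {"sentry", "warden", "cursor", "bugbot", "seer", "codeql"}

def should_wait(pending_check_names: list[str]) -> bool:
    # Wait only if a still-running check belongs to an actionable review bot.
    return any(
        bot in name.lower()
        for name in pending_check_names
        for bot in ACTIONABLE_BOTS
    )

print(should_wait(["Seer Review"]))    # True
print(should_wait(["codecov/patch"]))  # False
```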
Investigation is mandatory before any fix. Do not guess, assume, or infer the cause from the check name or a surface-level reading of the error. You must trace the failure to its root cause in the actual code.
For each failure:
Run gh run view <run-id> --log-failed if the snippet is truncated or ambiguous. Identify the exact failing assertion, exception, or lint rule.
Before committing, verify your fixes locally:
If local verification fails, fix before proceeding — do not push known-broken code.
git add <files>
git commit -m "fix: <descriptive message>"
git push
Keep monitoring CI status and review feedback in a loop instead of blocking:
a. Run uv run ${CLAUDE_SKILL_ROOT}/scripts/fetch_pr_checks.py to get current CI status and uv run ${CLAUDE_SKILL_ROOT}/scripts/fetch_pr_feedback.py for new review feedback
b. Address any new high/medium feedback immediately (same as step 3)
c. If changes were needed, commit and push (this restarts CI), then continue monitoring from the refreshed branch state
d. Sleep 30 seconds (don't increase on subsequent iterations), then repeat from the first sub-step
Run uv run ${CLAUDE_SKILL_ROOT}/scripts/fetch_pr_feedback.py. Address any new high/medium feedback; if changes are needed, return to step 6.
If you're in Claude Code, you can replace the sleep-based wait above with MonitorTool so the polling happens in the background instead of consuming context. This is a Claude-only optimization, not the default workflow for other agents.
Run the bundled monitor script through MonitorTool with persistent: false:
uv run ${CLAUDE_SKILL_ROOT}/scripts/monitor_pr_checks.py
Set timeout_ms to match the repository's normal CI duration instead of hardcoding a 15-minute timeout.
After MonitorTool reports completion, re-run uv run ${CLAUDE_SKILL_ROOT}/scripts/fetch_pr_checks.py.
If you pushed new changes while monitoring, start a fresh monitor so it watches the new set of CI runs.
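The polling loop above reduces to a small decision rule: feedback first, then failures, then pending checks. A minimal sketch (a hypothetical helper, not part of the skill's scripts):

```python
# Hypothetical decision helper for the polling loop: given the latest CI
# summary and categorized feedback, pick the next action for this iteration.
def next_action(summary: dict, feedback: dict) -> str:
    if feedback.get("high") or feedback.get("medium"):
        return "fix_feedback"  # address it now, then commit and push
    if summary.get("failed", 0) > 0:
        return "fix_ci"        # investigate via fetch_pr_checks output
    if summary.get("pending", 0) > 0:
        return "sleep_30s"     # keep polling; don't back off
    return "done"              # everything terminal and green

print(next_action({"failed": 0, "pending": 2}, {"high": [], "medium": []}))
```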
If step 7 required code changes (from new feedback after CI passed), return to step 2 for a fresh cycle. CI failures during monitoring are already handled within step 7's polling loop.
Success: All checks pass, post-CI feedback re-check is clean (no new unaddressed high/medium feedback including review bot findings), user has decided on low-priority items.
Ask for help: Same failure after 2 attempts, feedback needs clarification, infrastructure issues.
Stop: No PR exists, branch needs rebase.
If scripts fail, use gh CLI directly:
gh pr checks --json name,state,bucket,link
gh run view <run-id> --log-failed
gh api repos/{owner}/{repo}/pulls/{number}/comments