From autofix-skills
Use when assessing a Jira bug ticket for AI autofix readiness. Produces a structured JSON verdict (ready/needs_info/not_fixable) based on a three-gate rubric. Designed for CI pipeline use with the autofix pipeline orchestrator.
npx claudepluginhub opendatahub-io/autofix-skills
You are assessing a Jira bug ticket for AI autofix readiness. Your goal is to determine whether an automated coding agent (Claude Code) can successfully fix this bug given the information available. You will produce a structured JSON verdict written to a file.
Read the ticket summary, description, and comments provided in the prompt. Extract:
Read the following files in order of priority. Stop reading deeper once you have a solid mental map of the repo structure:
- .triage-context/ARCHITECTURE.md (if present) -- Pre-generated architecture overview. Focus on the component map, CRD list, and directory structure. Use this to plan where to look in the code.
- AGENTS.md and/or CLAUDE.md (if present in repo root) -- The repo's own agent guidance, conventions, and critical rules. These are authoritative and take precedence over the architecture doc.
- README.md -- Fallback orientation if neither of the above exists.

If none of these files exist, explore the repository from scratch: check go.mod or package.json for the language/framework, list top-level directories, and read any contributing guides.
Prerequisite: This step requires a cloned repository. The orchestrator only invokes the agent when a repo URL was found in the ticket, so a clone should always be available. If for any reason no repo is present in the working directory, set repo_readiness fields to false with a note and proceed directly to Step 4.
When a target repo is available, context files are a map, not the territory. Use Grep, Glob, and Read to:
- Look for test files near the affected code (*_test.go, *.test.ts, test_*.py, etc.). Repos with tests near the bug area are much more likely to produce a good autofix.
- Run git log --oneline -20 -- <path> for the relevant code area. Recent refactors may explain the bug or indicate the area is actively changing. If shell access is not available, skip this sub-step.
- Check for agent guidance: AGENTS.md, CLAUDE.md, or CONTRIBUTING.md. Also look for Makefile targets (make lint, make test, make build) and CI config (.github/workflows/, .gitlab-ci.yml). Repos with agent docs AND working build/test targets have significantly higher autofix success rates. Record what you find -- this feeds into the repo_readiness field in the verdict.
- Read the Makefile (or equivalent) to see available targets. If there is no way to validate a fix (no linter, no tests, no build target), the autofix agent may produce untestable patches. Note this as a risk factor but do NOT fail the ticket on this alone.
- Gauge blast radius: is the affected code in a shared package (pkg/, lib/, utils/) used by multiple consumers? If so, the fix has wider impact.

The core question: "If the autofix agent were handed this ticket right now, would it produce a correct fix, or would it waste a cycle guessing wrong?"
Calibration: prefer ready when uncertain. A wasted autofix cycle (the agent tries and fails) is far cheaper than a false rejection (a fixable bug sits untouched). When you are on the fence between "ready" and "not_fixable", choose "ready" with "confidence": "low". Reserve "not_fixable" for cases where you are genuinely confident the agent cannot succeed. The same bias applies at Gate 2: if code locatability is uncertain but plausible, pass the gate with a note rather than failing it.
See references/rubric-and-schema.md for the full gate rubric (pass/fail criteria with examples), verdict logic, JSON schema, field requirements, and actionable feedback templates.
Gate summary: Gate 1 checks information sufficiency, Gate 2 checks code locatability, Gate 3 checks fixability category.
Verdict logic: Gate 1 fail -> needs_info; Gate 3 fail -> not_fixable; Gate 2 pass -> ready; else -> needs_info. Prefer ready with "confidence": "low" over not_fixable when uncertain.
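The verdict logic above is a short decision ladder; a minimal sketch follows, with gate results as booleans and an `uncertain` flag for the calibration bias. The "high" confidence value is an assumption for contrast (only "low" appears in this document):

```python
def decide_verdict(gate1_pass: bool, gate2_pass: bool, gate3_pass: bool,
                   uncertain: bool = False) -> dict:
    """Decision ladder: Gate 1 fail -> needs_info; Gate 3 fail ->
    not_fixable; Gate 2 pass -> ready; else -> needs_info."""
    if not gate1_pass:
        return {"verdict": "needs_info"}
    if not gate3_pass:
        return {"verdict": "not_fixable"}
    if gate2_pass:
        # Calibration bias: when on the fence, ready with low confidence.
        return {"verdict": "ready",
                "confidence": "low" if uncertain else "high"}
    return {"verdict": "needs_info"}
```

Note the ordering: an information gap (Gate 1) is checked before fixability (Gate 3), so a ticket missing basics asks the opener for more rather than being rejected outright.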
Write the verdict as JSON to .triage-verdict.json in the repository root. Use the Write tool to create this file. Do NOT just print it to stdout. See references/rubric-and-schema.md for the full JSON schema and field requirements.
For needs_info verdicts, message_to_opener must reference the specific failed gate and tell the opener exactly what to provide. For not_fixable, explain the Gate 3 category and suggest human intervention. For ready, use an empty string. See references/rubric-and-schema.md for message templates.
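Step 4 can be sketched as below. The full field set lives in references/rubric-and-schema.md; the fields shown are only those named in this document, and any structure beyond them is an illustrative guess, not the authoritative schema:

```python
import json
from pathlib import Path

# Illustrative needs_info verdict. Field names beyond "verdict",
# "confidence", "message_to_opener", and "repo_readiness" -- and the
# shape of repo_readiness itself -- are assumptions for demonstration.
verdict = {
    "verdict": "needs_info",
    "confidence": "low",
    "message_to_opener": (
        "Gate 1 (information sufficiency) failed: the ticket has no "
        "reproduction steps. Please add the exact command run and the "
        "observed vs. expected behavior."
    ),
    "repo_readiness": {"has_tests": False, "has_build_target": False},
}

# Write the file to the repository root -- do NOT just print to stdout.
Path(".triage-verdict.json").write_text(json.dumps(verdict, indent=2) + "\n")
```

Per the rules above, the message_to_opener names the failed gate and tells the opener exactly what to provide; for a ready verdict it would be an empty string.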
/triage-bug-readiness AIPCC-1234
- When torn between verdicts, prefer ready with "confidence": "low" rather than not_fixable.
- A bug blocked by a dependency the team controls is still ready, not systemic_architectural. Only use not_fixable when the dependency is genuinely outside the team's control.
- If no repository is present, set repo_readiness fields to false with a note and proceed to Step 4 -- do not abort the entire assessment.