From odh-ai-helpers
Triages JIRA bugs and stories against repo code to classify AI fixability: AI-Fixable, Needs Human, or Needs Info. For reviewing backlogs to identify agent-fixable issues.
npx claudepluginhub jeremyeder/ai-helpers-fixed --plugin odh-ai-helpers

This skill is limited to using the following tools:
Triage JIRA bugs from a project backlog against a loaded repository to determine which bugs an AI agent can fix. Produces a focused fixability report.
This skill answers one question: can an AI agent fix this bug in this repo? It classifies issues as AI-Fixable, Needs Human, or Needs Info based on a fixability rubric.
"Bug" is used broadly here — analyze Bugs and Stories for fixability. Skip Epics, Initiatives, and Features as they are too high-level for a single code fix (note them as skipped in the report).
Out of scope:
Allowed tools: getAccessibleAtlassianResources, searchJiraIssuesUsingJql, getJiraIssue, editJiraIssue, addCommentToJiraIssue

User: Triage <PROJECT> bugs against this repo
User: Triage bugs from filter=<ID>
User: Triage project = <PROJECT> AND component = "<component>" AND status = New
User: Triage filter=<ID> and update JIRA with labels
User: Triage just <KEY>
Query:
Use filter=<ID> as the JQL when the user supplies a filter; otherwise default to:

project = <PROJECT> AND component = "<component>" AND type in (Bug, Story) AND status in (New, Refinement, "To Do") AND assignee is EMPTY ORDER BY priority DESC

Target repo:
"<repo> loaded — should I triage bugs against this repo?"

Repo state (read-only by default):
Triage is read-only — never switch branches or modify files. Run git fetch origin and compare HEAD with origin/main. If behind, inform the user but proceed on the current HEAD. Only create a temporary branch (git checkout -b ai-bug-fix-triage-<date> origin/main) if the user explicitly asks. State which commit is being used.
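The repo-state check above can be sketched as a small script. This is a minimal illustration using plain subprocess calls, assuming git is on PATH and the default branch is main; it is not part of the skill itself.

```python
# Sketch of the read-only repo-state check: fetch, compare HEAD with
# origin/main, warn if behind, and report the commit being triaged.
import subprocess

def git(repo, *args):
    """Run a git command in `repo` and return its stripped stdout."""
    return subprocess.run(
        ["git", "-C", repo, *args],
        capture_output=True, text=True, check=True,
    ).stdout.strip()

def triage_commit(repo):
    """Fetch origin, note if HEAD is behind origin/main, return the commit used."""
    git(repo, "fetch", "origin")
    head = git(repo, "rev-parse", "HEAD")
    # Count commits origin/main has that HEAD lacks (how far behind we are).
    behind = int(git(repo, "rev-list", "--count", "HEAD..origin/main"))
    if behind:
        print(f"HEAD is {behind} commit(s) behind origin/main; proceeding on {head[:12]}.")
    return head
```

Note that no command here writes to the working tree; branch creation stays opt-in, exactly as the text requires.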
Use the Atlassian MCP searchJiraIssuesUsingJql tool. Discover cloudId at runtime via getAccessibleAtlassianResources. Never hardcode cloud ID or site URL in output, comments, or reports.
Tool: searchJiraIssuesUsingJql
Arguments:
cloudId: <from getAccessibleAtlassianResources — never hardcode>
jql: <constructed query>
maxResults: 50
responseContentFormat: "markdown"
fields: ["summary", "description", "status", "issuetype", "priority",
"labels", "components", "assignee", "created", "updated"]
Fast path vs. interactive path:
If the user provides a complete request (query + repo + exclusion criteria), proceed directly without stopping to ask questions. Only present scope options when genuinely needed:
Use nextPageToken for iteration resume. After each batch, report progress:

Processed 50/149+ | Relevant: 5 | AI-Fixable: 2 | Needs Human: 2 | Needs Info: 1 | Not Relevant: 45
Yield: 10% relevant | Continue? (next: 51-100) | "write back" | "stop"
When yield drops to zero for a full batch, recommend stopping. If the Not Relevant rate exceeds 80% in the first batch, suggest the user narrow their JQL (e.g., add keywords, restrict issue types, or use a more specific component) before continuing — this avoids burning through batches with low signal.
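The batch heuristics above reduce to a small decision rule. The sketch below encodes the two thresholds named in the text (zero yield for a full batch, and an over-80% Not Relevant rate in the first batch); the return strings are illustrative.

```python
# Decision rule for the batch-yield heuristic described above.
def next_action(batch_counts, first_batch):
    """batch_counts: dict with 'relevant' and 'not_relevant' tallies for one batch."""
    total = batch_counts["relevant"] + batch_counts["not_relevant"]
    if total == 0:
        return "stop"
    if batch_counts["relevant"] == 0:
        return "recommend-stop"          # yield dropped to zero for a full batch
    if first_batch and batch_counts["not_relevant"] / total > 0.80:
        return "suggest-narrower-jql"    # low signal: tighten the JQL first
    return "continue"
```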
Before analysis, filter out tickets that are managed elsewhere, irrelevant to the target repo, or already triaged.
3a. Label-based relevance hints (soft signals)
Scan each ticket's labels field before reading descriptions. Labels are soft signals, not definitive — always corroborate with the summary line before classifying as Not Relevant. If there is any doubt, read the description.
| Label pattern | Signal | Action |
|---|---|---|
| Automation labels (e.g., auto-created, nightly-build) | Auto-generated by CI/bots | Group together; check which repo's system failed |
| Repo or project name labels | Likely belongs to that repo | Confirm with summary before skipping |
| Technology/platform labels | Platform-specific scope | Narrow relevance, but target repo may still own the fix |
| Process/workflow labels | Organizational context | Check if they match the target repo's domain |
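The soft-signal scan in the table above could look like the following sketch. The pattern sets and hint strings are illustrative assumptions, not an exhaustive taxonomy, and the result is a hint only, never a final verdict.

```python
# Scan a ticket's labels for soft relevance signals. Callers must still
# corroborate every hint with the summary line before classifying.
AUTOMATION = {"auto-created", "nightly-build"}  # illustrative pattern set

def relevance_hint(labels, summary, repo_name):
    """Return a soft hint for one ticket; never a classification."""
    labels = {l.lower() for l in labels}
    if labels & AUTOMATION:
        return "automation: group together, check which repo's system failed"
    if any(repo_name in l for l in labels) or repo_name in summary.lower():
        return "likely this repo: confirm with summary"
    return "no label signal: read the description"
```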
3b. Skip already-triaged bugs (idempotency)
Check each bug's labels for an existing classification label (ai-fixable, ai-nonfixable, ai-needs-info):
- ai-fixable or ai-nonfixable: Already triaged — skip by default.
- ai-needs-info: Always re-evaluate. Fetch full issue details via getJiraIssue to access comments and their timestamps. Check for replies posted after the original triage comment. If new information was provided, re-run the fixability analysis and update the classification. If no new info, skip.

Report: Skipped: N tickets already classified (M ai-needs-info re-evaluated after new comments).
If the user explicitly requests re-triage (e.g., "re-triage everything"), process all tickets regardless of existing labels. Compare the new classification to the old and note any changes.
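The idempotency rules in Step 3b map cleanly to a per-ticket decision. The sketch below is a simplified stand-in (plain label lists and ISO-8601 timestamp strings rather than full JIRA issue objects):

```python
# Per-ticket idempotency decision for Step 3b.
TRIAGED = {"ai-fixable", "ai-nonfixable"}

def triage_decision(labels, last_triage_at=None, latest_comment_at=None):
    """Return 'skip', 're-evaluate', or 'triage' for one ticket."""
    labels = set(labels)
    if labels & TRIAGED:
        return "skip"                      # already classified
    if "ai-needs-info" in labels:
        # Re-run only if someone replied after the original triage comment.
        if latest_comment_at and last_triage_at and latest_comment_at > last_triage_at:
            return "re-evaluate"
        return "skip"
    return "triage"
```

A user request to "re-triage everything" would bypass this check entirely, as described below.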
3c. Ask about additional exclusion criteria rather than assuming. Present observed patterns:
I see the following patterns in the results:
- N tickets with labels [auto-created, nightly-build] — are these handled by automation?
- N tickets already assigned to someone — include or skip?
- N tickets with status "Closed" or "Done" — skip these?
Which of these should I exclude?
If the user provides exclusion criteria upfront, apply them directly without asking.
In the report, summarize filters in one line:
Excluded: 12 tickets (auto-created nightly failures per user request)
Skipped: 8 tickets (6 already classified; 2 ai-needs-info re-evaluated after new comments)
For each bug remaining after filtering:
4a. Read and understand the bug
Extract: summary, full description, error messages/stack traces, affected files or components, repro steps, workarounds.
4b. Check relevance to the target repo
Check in this order (cheapest signals first):
[org/repo-name] summary prefixes identify ownership quickly; treat them as strong hints and confirm before skipping (see Step 3a).

If the bug is clearly about a different repo or system, classify as Not Relevant with a brief note about the likely owner and move on. Do not search the codebase for bugs that aren't about this repo.
4c. Search the target repo for related code
Only for bugs that pass the relevance check:
AGENTS.md or CLAUDE.md for repo structure guidance

4d. Apply the fixability rubric
| Question | What to look for |
|---|---|
| Root cause identifiable in this repo? | Specific file, function, or config where the problem originates? If external system, upstream dep, or infrastructure — No. |
| Clear code-level fix? | Can the fix be expressed as a code change (config edit, logic fix, version bump, script fix)? Or does it need hardware access, infrastructure changes, upstream patches, or cross-team decisions? |
| Fix verifiable? | Does the repo have tests, linting, or CI checks covering this area? If no automated checks exist, can correctness be confirmed by code review (e.g., config fixes, documentation, straightforward logic changes)? Check AGENTS.md for test/verify commands. |
4e. Classify
| Classification | Criteria | Label |
|---|---|---|
| AI-Fixable | All 3 = Yes | ai-fixable |
| Needs Human | Any = No | ai-nonfixable |
| Needs Info | Description too vague to determine root cause | ai-needs-info |
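The classification table above is a direct mapping from the three rubric answers plus a vagueness flag to exactly one label, which can be sketched as:

```python
# Encode the rubric-to-label table: vagueness short-circuits first, then
# all three Yes answers are required for AI-Fixable.
def classify(root_cause_here, code_level_fix, verifiable, too_vague=False):
    """Return (classification, label) for one ticket."""
    if too_vague:
        return "Needs Info", "ai-needs-info"
    if root_cause_here and code_level_fix and verifiable:
        return "AI-Fixable", "ai-fixable"
    return "Needs Human", "ai-nonfixable"
```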
AI-Fixable — also provide:
Needs Human — briefly explain why (e.g., "requires upstream change", "needs infrastructure access", "cross-team architectural decision").
Needs Info — provide actionable questions that would unblock triage. Frame questions so anyone on the team can answer, not just the reporter. Examples:
Produce a markdown report with:
Header: Date, JQL used, target repo (commit/branch), counts (Fetched / Excluded / Skipped / Triaged).
Summary table: Counts by classification (AI-Fixable, Needs Human, Needs Info).
AI-Fixable Bugs table: Ordered by Priority > Confidence > Age. Columns: Key, Summary, Priority, Confidence, Fix Approach.
Detailed Analysis — one section per bug:
Iteration Status (if batching): Progress summary with cumulative counts, yield rate per batch, and actionable next steps (same format as Step 2). When yield drops to zero for a full batch, recommend stopping.
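The report header and summary table described above can be assembled from the running tallies. This is an illustrative builder only; the exact wording and layout of the report are up to the skill.

```python
# Assemble the report header and summary table from triage tallies.
def report_header(date, jql, repo_at, counts):
    """counts: dict with fetched/excluded/skipped/triaged plus per-class tallies."""
    lines = [
        f"# AI Bug-Fix Triage: {date}",
        f"JQL: `{jql}`  |  Repo: {repo_at}",
        f"Fetched {counts['fetched']} / Excluded {counts['excluded']} / "
        f"Skipped {counts['skipped']} / Triaged {counts['triaged']}",
        "",
        "| Classification | Count |",
        "|---|---|",
    ]
    for name in ("AI-Fixable", "Needs Human", "Needs Info"):
        lines.append(f"| {name} | {counts.get(name, 0)} |")
    return "\n".join(lines)
```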
Only perform if the user explicitly asks to "write back", "update JIRA", or "add labels". Only write labels and comments to tickets classified as AI-Fixable, Needs Human, or Needs Info — never label tickets classified as Not Relevant. Those tickets belong to other repos and adding labels would pollute their triage state.
Idempotency check before writing:
If the ticket already carries a classification label (ai-fixable, ai-nonfixable, or ai-needs-info), skip the label update.

Add labels using editJiraIssue: preserve existing labels, append exactly one of ai-fixable, ai-nonfixable, or ai-needs-info. Do not add any additional meta-labels.
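The label-update rule above (preserve existing labels, append exactly one classification label, skip if already labeled) can be sketched as a pure helper; the actual write goes through the editJiraIssue MCP tool, which this does not call.

```python
# Compute the label list to write back, enforcing the idempotency and
# one-classification-label rules from the text.
CLASSIFICATION = {"ai-fixable", "ai-nonfixable", "ai-needs-info"}

def updated_labels(existing, new_label):
    """Return the new label list, or None if the ticket is already classified."""
    assert new_label in CLASSIFICATION
    if CLASSIFICATION & set(existing):
        return None                        # idempotency: skip the update
    return list(existing) + [new_label]    # preserve existing, append one
```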
Add triage comment using addCommentToJiraIssue (markdown). Start with **AI Triage Assessment**, end with _Generated by ai-bug-fix-triage skill_. Content varies: