Assess RFEs against quality criteria. Pass a Jira issue key, file path, URL, raw text, or wildcard for bulk.
Install via:

```
npx claudepluginhub opendatahub-io/skills-registry --plugin assess-rfe
```

This skill is limited to using the following tools: `Bash`, `Read`, `Write`, `Glob`, `Task`, `WebFetch`, `mcp__atlassian__getJiraIssue`, and `mcp__atlassian__searchJiraIssuesUsingJql`.
Usage:

```
/assess-rfe RHAIRFE-1234           # Jira issue key
/assess-rfe PROJ-99                # any project's issue key
/assess-rfe /path/to/document.md   # file path
/assess-rfe https://some-url       # URL
/assess-rfe <paste raw text>       # raw text
/assess-rfe RHAIRFE-*              # wildcard for bulk assessment
```
When this skill is invoked, resolve the absolute path of the plugin root directory. This SKILL.md is at <plugin_root>/skills/assess-rfe/SKILL.md — the plugin root is two levels up. Determine this path once at the start and use it for all script and file references. Store it as {PLUGIN_ROOT} for substitution into commands and agent prompts.
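A minimal sketch of that resolution, assuming the `<plugin_root>/skills/assess-rfe/SKILL.md` layout above (the absolute SKILL.md path is hypothetical; use whatever the runtime reports):

```python
from pathlib import Path

# Hypothetical absolute path of this SKILL.md as reported at runtime.
skill_md = Path("/abs/path/to/plugin/skills/assess-rfe/SKILL.md")

# parents[0] = skills/assess-rfe, parents[1] = skills, parents[2] = plugin root.
PLUGIN_ROOT = skill_md.parents[2]
```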
Run each script as a single plain command (e.g., `python3 {PLUGIN_ROOT}/scripts/setup_run.py RHAIRFE`). Do not use `|`, `&&`, `;`, `2>/dev/null`, or redirects. The Bash tool returns command output as a string; parse it programmatically in your logic, not with sed/awk/wc/grep pipelines (a parsing sketch follows the mode overview below).

Single-input mode handles any source (Jira key via MCP, file, URL, or raw text). Bulk mode fetches all issues upfront via scripts/dump_jira.py, then agents score from local files. Results are saved as individual files in a timestamped run directory.
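A minimal sketch of parsing script output in-logic, assuming the `KEY=value` line format the scripts print (reused in the Phase 2 sketch later):

```python
def parse_kv(output: str) -> dict[str, str]:
    """Parse KEY=value lines from a script's stdout string into a dict."""
    result: dict[str, str] = {}
    for line in output.splitlines():
        if "=" in line:
            key, _, value = line.partition("=")
            result[key.strip()] = value.strip()
    return result
```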
File layout:

```
assessments/RHAIRFE/              # in the project directory (persistent)
  20260322-143000/                # timestamped run
    RHAIRFE-42.result.md
    queue.txt                     # pending keys (managed by next_batch.py)
    scores.csv                    # generated by parse_results.py when complete
  current -> 20260322-143000      # symlink to active/latest run
/tmp/rfe-assess/RHAIRFE/          # fetched issues (transient cache)
  RHAIRFE-42.md
/tmp/rfe-assess/single/           # single-mode temp files
  RHAIRFE-1234.md
```
**Single-input mode.** Detect the input type:
- Jira issue key (matches `[A-Z]+-\d+`): try MCP first, then fall back to the REST API:
  1. Call `mcp__atlassian__getJiraIssue` with the key and `cloudId="https://redhat.atlassian.net"`. If the call succeeds, extract the summary and description.
  2. Otherwise, run `python3 {PLUGIN_ROOT}/scripts/fetch_single.py {KEY}`. This requires `JIRA_SERVER` (or `JIRA_URL`/`JIRA_BASE_URL`), `JIRA_USER` (or `JIRA_EMAIL`), and `JIRA_TOKEN` (or `JIRA_API_TOKEN`) environment variables. The script fetches the issue, converts ADF to markdown, and writes it directly to `/tmp/rfe-assess/single/{KEY}.md`. Parse its output for `ENV_OK=false` / `ENV_MISSING=...`; if env vars are missing, prompt the user to set them (same guidance as Phase 0 of bulk mode). If the script succeeds, skip the Write step below, since the script already wrote the file.
- File path (starts with `/`, `./`, or `~`, or exists on disk): read the file contents.
- URL (starts with `http://` or `https://`): fetch the content.
- Anything else: treat it as raw text.
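A sketch of these detection heuristics (only the Jira-key regex and prefixes above are given by this skill; the wildcard check and ordering are assumptions):

```python
import re
from pathlib import Path

def detect_input_type(arg: str) -> str:
    """Classify the /assess-rfe argument per the rules above."""
    if arg.endswith("-*"):
        return "bulk"        # wildcard, e.g. RHAIRFE-*
    if re.fullmatch(r"[A-Z]+-\d+", arg):
        return "jira-key"
    if arg.startswith(("http://", "https://")):
        return "url"
    if arg.startswith(("/", "./", "~")) or Path(arg).exists():
        return "file"
    return "raw-text"
```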
Then assess:

1. Run `python3 {PLUGIN_ROOT}/scripts/prep_single.py {KEY}` to clean up stale files and ensure the output directory exists. This removes any previous `.md` and `.result.md` for the key so Write sees them as new files.
2. Write the content to `/tmp/rfe-assess/single/{KEY}.md` using the same `# KEY: Title` format as the cache files (an example follows these steps). For non-Jira inputs, use a descriptive key (e.g., the filename or `INPUT`). This is a separate directory from the bulk cache; never write single-mode files into `/tmp/rfe-assess/RHAIRFE/`, as that would clobber cached bulk data. Note: if the REST API fallback (fetch_single.py) was used, the file is already written, so skip this step.
3. Dispatch one assessment agent using the prompt template from Phase 2 of bulk mode, with `{DATA_FILE}` set to `/tmp/rfe-assess/single/{KEY}.md` and `{RUN_DIR}` set to `/tmp/rfe-assess/single`.
4. Read the result from `/tmp/rfe-assess/single/{KEY}.result.md`, wrap it with a header, and present it to the user.
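For illustration, a single-mode data file might look like this (summary and body are placeholders, not real issue content):

```
# RHAIRFE-1234: <issue summary>

<issue description rendered as markdown>
```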
**Bulk mode (`RHAIRFE-*`).**

**Phase 0: Preflight checks.** Run `python3 {PLUGIN_ROOT}/scripts/preflight.py RHAIRFE` to check environment variables and current run state. Parse the output:
- `ENV_OK=true/false` and `ENV_MISSING=...`: if env vars are missing, prompt the user to set:
  - `JIRA_SERVER` (or `JIRA_URL` or `JIRA_BASE_URL`): the Jira instance URL (e.g., https://redhat.atlassian.net)
  - `JIRA_USER` (or `JIRA_EMAIL`): their Jira email address
  - `JIRA_TOKEN` (or `JIRA_API_TOKEN`): a Jira API token (created at https://id.atlassian.com/manage-profile/security/api-tokens)

  The user can run `! export JIRA_SERVER=... JIRA_USER=... JIRA_TOKEN=...` in the prompt, or add the variables to their shell profile for persistence. The alternative names `JIRA_EMAIL` and `JIRA_API_TOKEN` are also accepted.
- `CACHE_COUNT=N`: number of cached issues (0 means dump_jira.py hasn't been run yet)
- `CURRENT_RUN=path/none`, `CURRENT_ASSESSED=N`, `CURRENT_COMPLETE=true/false`: existing run state. If there is an incomplete run (`CURRENT_COMPLETE=false`), inform the user it will be resumed.
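For illustration, a preflight run that finds a populated cache and an interrupted previous run might print (all values hypothetical):

```
ENV_OK=true
CACHE_COUNT=1240
CURRENT_RUN=assessments/RHAIRFE/20260322-143000
CURRENT_ASSESSED=312
CURRENT_COMPLETE=false
```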
**Phase 1: Fetch all issues to local files.** Run `python3 {PLUGIN_ROOT}/scripts/dump_jira.py RHAIRFE` to fetch every issue in the project via the Jira REST API. This writes one file per issue to `/tmp/rfe-assess/RHAIRFE/` (e.g., `RHAIRFE-42.md`). The script renders Jira's ADF content as proper markdown, preserving headings, lists, tables, links, and emphasis.
**Phase 1.5: Set up run directory.** Run `python3 {PLUGIN_ROOT}/scripts/setup_run.py RHAIRFE` (add `--limit N` if the user requested a subset). The script handles all run-state logic (resume detection via the `current` symlink and `scores.csv` presence, creating timestamped directories, updating symlinks) and outputs:
- `RUN_DIR=<path>`: the absolute path to use for this run
- `PENDING=<count>`: number of issues to assess
- `QUEUE_FILE=<path>`: path to the queue file containing all pending keys (one per line)

Capture the `{RUN_DIR}` and `{PENDING}` count for the phases below. Do NOT try to memorize or generate the key list yourself: the queue file is the single source of truth for which keys to process.
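If a sanity check is ever needed, the queue file can be read directly, read-only (popping keys remains next_batch.py's job). A minimal sketch, assuming `RUN_DIR` and `PENDING` were captured above:

```python
from pathlib import Path

queue_file = Path(RUN_DIR) / "queue.txt"  # the QUEUE_FILE reported by setup_run.py
pending = [line.strip() for line in queue_file.read_text().splitlines() if line.strip()]
assert len(pending) == PENDING  # should match setup_run.py's PENDING count
```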
**Phase 2: Assess with a pipeline of 30 concurrent agents.** Use next_batch.py to get keys from the queue. Never generate key sequences yourself (e.g., "RHAIRFE-1 through RHAIRFE-30"); always get keys from the script to avoid assessing non-existent issues. Repeat until `BATCH_SIZE=0` (queue empty):

1. Run `python3 {PLUGIN_ROOT}/scripts/next_batch.py {RUN_DIR} --batch-size 30` to pop the next batch of keys. Parse the output:
   - `BATCH_SIZE=N`: number of keys in this batch (0 = queue exhausted)
   - `REMAINING=N`: keys still in queue after this batch
   - the batch's keys, one per line, after a `---` separator
2. Dispatch one assessment agent per key with this prompt:

You are an RFE quality assessor. Your task:
1. Read `{PROMPT_PATH}` for the full scoring rubric.
2. Follow its instructions exactly, substituting {KEY} for the issue key and {RUN_DIR} for the run directory. Read the data file from {DATA_FILE} (not the path in the rubric's step 1).
Issue key: {KEY}
Data file: {DATA_FILE}
Run directory: {RUN_DIR}
The coordinator MUST substitute all placeholders with actual values before passing this prompt to the agent:
- `{PROMPT_PATH}` → absolute path of `{PLUGIN_ROOT}/scripts/agent_prompt.md`
- `{DATA_FILE}` → for bulk: `/tmp/rfe-assess/RHAIRFE/{KEY}.md`; for single: `/tmp/rfe-assess/single/{KEY}.md`
- `{KEY}` and `{RUN_DIR}` → actual values
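A minimal sketch of that substitution (the function name and argument order are illustrative, not part of the skill):

```python
def fill_prompt(template: str, key: str, run_dir: str, data_file: str, plugin_root: str) -> str:
    """Replace every placeholder before handing the prompt to an agent."""
    return (
        template
        .replace("{PROMPT_PATH}", f"{plugin_root}/scripts/agent_prompt.md")
        .replace("{DATA_FILE}", data_file)
        .replace("{RUN_DIR}", run_dir)
        .replace("{KEY}", key)
    )
```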
This ensures every agent reads the identical rubric from the single source of truth, with no drift from coordinator paraphrasing.

To track progress, run `python3 {PLUGIN_ROOT}/scripts/check_progress.py {RUN_DIR}` to get `COMPLETED=N`, `TOTAL=N`, and `REMAINING=N`. Never use shell pipes (`ls | wc -l`) or text-processing commands (sed, awk, grep) to check progress; use this script or the Glob tool instead.
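Putting Phase 2 together, a sketch of the coordinator loop: `subprocess` stands in for the Bash tool, `dispatch_agent` is a hypothetical stand-in for the Task tool, and `parse_kv`/`fill_prompt` are the earlier sketches:

```python
import subprocess

def run_script(*args: str) -> str:
    """Run one helper script as a single plain command and return its stdout."""
    return subprocess.run(["python3", *args], capture_output=True, text=True, check=True).stdout

while True:
    out = run_script(f"{PLUGIN_ROOT}/scripts/next_batch.py", RUN_DIR, "--batch-size", "30")
    header, _, key_block = out.partition("---")
    meta = parse_kv(header)
    if meta["BATCH_SIZE"] == "0":
        break                             # queue exhausted
    keys = [k.strip() for k in key_block.splitlines() if k.strip()]
    for key in keys:
        prompt = fill_prompt(TEMPLATE, key, RUN_DIR,
                             f"/tmp/rfe-assess/RHAIRFE/{key}.md", str(PLUGIN_ROOT))
        dispatch_agent(prompt)            # hypothetical: one concurrent agent per key

progress = parse_kv(run_script(f"{PLUGIN_ROOT}/scripts/check_progress.py", RUN_DIR))
```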
**Phase 3: Generate CSV and present results.** Run `python3 {PLUGIN_ROOT}/scripts/parse_results.py {RUN_DIR}` to parse all `.result.md` files and generate `{RUN_DIR}/scores.csv`. The presence of scores.csv marks the run as complete. Then run `python3 {PLUGIN_ROOT}/scripts/summarize_run.py {RUN_DIR}` to produce the full summary analysis (pass/fail counts, score distribution, criteria averages, zero-score counts, what-if analysis, near-miss failures). Present the output to the user.

The full agent prompt is stored in `{PLUGIN_ROOT}/scripts/agent_prompt.md`. This is the single source of truth for the scoring rubric, calibration examples, and output format.
For single-issue runs, the same prompt template is used with `{DATA_FILE}` set to `/tmp/rfe-assess/single/{KEY}.md` and `{RUN_DIR}` set to `/tmp/rfe-assess/single`. The agent writes its result there just like bulk agents.

For a single issue, wrap the agent output with a header:
```
## RFE Assessment: RHAIRFE-1234

[agent output]
```
For bulk runs, after Phase 3, present the summary analysis from the CSV to the user, including pass/fail counts, score distribution, criteria averages, and near-miss failures.

The bundled scripts:
| Script | Purpose |
|---|---|
| `dump_jira.py` | Fetches all issues from a Jira project via REST API v3, converts ADF to markdown, writes to `/tmp/rfe-assess/<PROJECT>/` |
| `preflight.py` | Checks env vars, cache state, and current run status |
| `setup_run.py` | Creates timestamped run directory with resume support (detects incomplete runs via `current` symlink) |
| `agent_prompt.md` | Full scoring rubric and instructions for assessment agents; use verbatim |
| `next_batch.py` | Pops the next N keys from the queue file; ensures each key is processed exactly once |
| `check_progress.py` | Reports completed vs. total issues for a run directory |
| `parse_results.py` | Extracts scores from `.result.md` files into scores.csv; handles format variants |
| `fetch_single.py` | Fetches a single Jira issue via REST API v3 (fallback for when MCP is unavailable), writes to `/tmp/rfe-assess/single/` |
| `prep_single.py` | Cleans up stale data/result files for a key in `/tmp/rfe-assess/single/` before a single-mode run |
| `summarize_run.py` | Produces summary analysis from scores.csv: pass/fail rates, criteria averages, what-if analysis, near-misses |
Add to your user or project .claude/settings.json:
```json
{
  "permissions": {
    "allow": [
      "Bash(python3 <PLUGIN_PATH>/scripts/preflight.py:*)",
      "Bash(python3 <PLUGIN_PATH>/scripts/dump_jira.py:*)",
      "Bash(python3 <PLUGIN_PATH>/scripts/setup_run.py:*)",
      "Bash(python3 <PLUGIN_PATH>/scripts/next_batch.py:*)",
      "Bash(python3 <PLUGIN_PATH>/scripts/check_progress.py:*)",
      "Bash(python3 <PLUGIN_PATH>/scripts/parse_results.py:*)",
      "Bash(python3 <PLUGIN_PATH>/scripts/summarize_run.py:*)",
      "Bash(python3 <PLUGIN_PATH>/scripts/export_rubric.py:*)",
      "Bash(python3 <PLUGIN_PATH>/scripts/fetch_single.py:*)",
      "Bash(python3 <PLUGIN_PATH>/scripts/prep_single.py:*)",
      "Bash(mkdir:*)",
      "Bash(ls:*)",
      "mcp__atlassian__getJiraIssue",
      "mcp__atlassian__searchJiraIssuesUsingJql"
    ],
    "additionalDirectories": [
      "/tmp/rfe-assess",
      "<PLUGIN_PATH>"
    ]
  }
}
```
<PLUGIN_PATH> is a placeholder — manually replace it with the absolute path to this plugin (e.g., /Users/you/devel/assess-rfe) before adding to your settings. The additionalDirectories entries allow agents to read the scoring rubric from the plugin directory and read/write cached issues and results in /tmp/rfe-assess/.