Generate a test plan from a strategy (RHAISTRAT or RHOAIENG issue), with optional ADR for extra technical depth. Use when starting test planning for a new RHOAI feature with a defined Jira strategy.
Install via:

```bash
npx claudepluginhub opendatahub-io/skills-registry --plugin test-plan
```

Usage: `/test-plan-create <JIRA_KEY> [ADR_FILE_PATH]`

This skill uses the workspace's default tool permissions.
Generate a complete test plan for a RHOAI feature based on a refined strategy, and optionally an ADR document for additional technical depth.
```
/test-plan-create <JIRA_KEY> [ADR_FILE_PATH]
```
Examples:
- `/test-plan-create RHAISTRAT-400` (preferred: formal strategy)
- `/test-plan-create RHOAIENG-48676` (alternative: bug/epic/task)
- `/test-plan-create RHAISTRAT-400 /path/to/adr.pdf`

Parse $ARGUMENTS to extract:
- A Jira key: a RHAISTRAT issue (e.g., RHAISTRAT-400) or a RHOAIENG issue (e.g., RHOAIENG-48676)
- An optional ADR file path

If no arguments are provided and a strategy file was just generated by /strat.create or /strat.refine in this session, use it automatically and proceed directly to Step 1.
If no arguments are provided and no strategy file is available from the current session, ask the user for:

- A Jira key: a RHAISTRAT issue key (e.g., RHAISTRAT-400) or a RHOAIENG issue key (e.g., RHOAIENG-48676)
- A feature directory name (e.g., mcp_catalog); if not provided, derive it from the feature name

Install the test-plan package (makes all scripts importable):
```bash
(cd $(git -C ${CLAUDE_SKILL_DIR} rev-parse --show-toplevel) && uv sync --extra dev)
```
If installation fails, inform the user and do NOT proceed. Once installed, all Python scripts will work from any directory.
Verify that the Atlassian MCP integration is available by attempting to use the mcp__atlassian__getJiraIssue tool.
If the MCP tool is not available:

- Inform the user and offer to read the strategy from a local file under artifacts/strat-tasks/ as an alternative

If the MCP tool is available, proceed to Step 0.3.
IMPORTANT: Test plan artifacts should NOT be created in the skill repository to avoid polluting the skill codebase.
Check for a --output-dir flag in the arguments:

- If present, use its value as the target directory and set FORCE_OUTPUT_DIR=true

If no --output-dir flag is given, check for a saved preference:
```bash
# Try to read saved preference from .claude/settings.json
saved_dir=$(jq -r '.["test-plan"]?.output_dir // empty' .claude/settings.json 2>/dev/null)
```
Ask user where to create artifacts via AskUserQuestion:
If a saved preference exists, ask:

Where should test plan artifacts be created?
Press Enter to use saved location: <saved_dir>
Or provide a different directory path:
If no saved preference exists, ask:

Where should test plan artifacts be created?
Provide a directory path, or press Enter for: ~/Code/collection-tests
Parse user input:

- Empty input: use the default (~/Code/collection-tests) or the saved preference
- Expand a leading ~ to the home directory

Validate against the skill repository (unless FORCE_OUTPUT_DIR=true):
```bash
# Validate path is not in skill repo
export CLAUDE_SKILL_DIR
force_flag=$([ "$FORCE_OUTPUT_DIR" = "true" ] && echo "--force" || echo "")
(cd $(git -C ${CLAUDE_SKILL_DIR} rev-parse --show-toplevel) && uv run python scripts/repo.py validate-local-path "$target_dir" $force_flag) || exit 1
```
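The parse-and-validate flow above can be sketched in pure bash. This is only an illustration: the real containment check lives in scripts/repo.py, and `repo_root` and `user_input` here are hypothetical example values.

```shell
# Sketch of parsing the user's input and rejecting paths inside the skill repo.
user_input="~/projects/tests"                             # what the user typed; empty means "use default"
target_dir="${user_input:-$HOME/Code/collection-tests}"   # empty input -> default location
target_dir="${target_dir/#\~/$HOME}"                      # expand a leading ~

repo_root="/home/user/skills-registry"                    # hypothetical skill repo root
case "$target_dir/" in
  "$repo_root"/*) echo "refusing: target is inside the skill repo" ;;
  *)              echo "ok: $target_dir" ;;
esac
```

The trailing slash added to `$target_dir/` before matching prevents a sibling directory like `skills-registry-extra` from matching the repo prefix.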
Ask to save preference via AskUserQuestion (unless using --output-dir):
Save this location for future /test-plan-create runs? [yes/no]
If yes: Save to .claude/settings.json:
```bash
# Create .claude directory if it doesn't exist
mkdir -p .claude

# Update or create settings.json with output_dir preference
# (jq --arg avoids quoting bugs when the path contains special characters)
if [ -f .claude/settings.json ]; then
  jq --arg dir "$target_dir" '.["test-plan"].output_dir = $dir' .claude/settings.json > .claude/settings.json.tmp
  mv .claude/settings.json.tmp .claude/settings.json
else
  jq -n --arg dir "$target_dir" '{"test-plan": {"output_dir": $dir}}' > .claude/settings.json
fi
echo "✓ Saved preference to .claude/settings.json"
```
If no: Continue without saving (will ask again next time)
Create output directory if it doesn't exist:
```bash
mkdir -p "$target_dir"
cd "$target_dir"
echo "✓ Creating test plan artifacts in: $target_dir"
```
Store for session context:

```bash
export TEST_PLAN_OUTPUT_DIR="$target_dir"
```
This allows /test-plan-create-cases to auto-use the same location if called in the same session.

Fetch the strategy via mcp__atlassian__getJiraIssue. The strategy contains both the technical approach (HOW) and the business need (WHAT/WHY). If auto-detected, read the local file from artifacts/strat-tasks/.
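The TEST_PLAN_OUTPUT_DIR auto-detection mentioned above might look like this in a later command. This is a sketch: the variable name comes from this document, but the fallback default and the exact checks are assumptions.

```shell
# Sketch: reuse the exported session location when it exists, else fall back.
TEST_PLAN_OUTPUT_DIR=$(mktemp -d)   # stands in for the directory exported earlier in the session
if [ -n "${TEST_PLAN_OUTPUT_DIR:-}" ] && [ -d "$TEST_PLAN_OUTPUT_DIR" ]; then
  target_dir="$TEST_PLAN_OUTPUT_DIR"
else
  target_dir="$HOME/Code/collection-tests"
fi
echo "using: $target_dir"
```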
Extract components from the Jira response (a list of RHOAI product component names, e.g., ["AI Hub", "Model Serving"]). If no components are set, use components = [].

Invoke these three forked analyzer skills in parallel using the Skill tool. Each runs in its own isolated context with the strategy and ADR content.
Pass the full strategy content (and ADR content if available) inline in the skill arguments so each sub-agent has the source material.
- test-plan.analyze.endpoints: Extracts feature scope (in-scope, out-of-scope, test objectives) and identifies API endpoints/methods under test. Produces findings for Sections 1 and 4.
- test-plan.analyze.risks: Determines test levels, test types, priority definitions, risks with mitigations, and non-functional requirement assessments. Produces findings for Sections 2, 7, and 8.
- test-plan.analyze.infra: Identifies test environment configuration, test data, test users, infrastructure, and tooling requirements. Produces findings for Sections 3 and 9.

Once all three sub-agents return:
- Collect their ## Gaps sections for Step 3.5

Create the output directory structure:

```bash
mkdir -p <target_dir>/<feature_name>/test_cases
```

- target_dir was determined in Step 0.3 (the shell is already in target_dir from Step 0.3, so: mkdir -p <feature_name>/test_cases)
- Read ${CLAUDE_SKILL_DIR}/test-plan-template.md using the Read tool
- Write <feature_name>/TestPlan.md by filling in the template with the gathered information. Follow the template structure exactly; do not add, remove, or reorder sections. Do NOT write frontmatter manually; Step 3.1 handles it.
- Create <feature_name>/README.md with:
After generating TestPlan.md, set its frontmatter using the frontmatter.py script via Bash. This validates the metadata against the schema before writing.
First, auto-detect source type from Jira key prefix:
```bash
# Auto-detect source_type from Jira key
if [[ <JIRA_KEY> == RHAISTRAT-* ]]; then
  SOURCE_TYPE="strat"
elif [[ <JIRA_KEY> == RHOAIENG-* ]]; then
  SOURCE_TYPE="issue"
fi
```
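The prefix detection above could also be wrapped in a reusable function with a guard for unexpected key prefixes. This is a sketch; the "unknown" fallback and the function name are assumptions, not part of the skill.

```shell
# Sketch: prefix detection as a function; unmatched keys report "unknown" and fail.
detect_source_type() {
  case "$1" in
    RHAISTRAT-*) echo "strat" ;;
    RHOAIENG-*)  echo "issue" ;;
    *)           echo "unknown"; return 1 ;;
  esac
}

SOURCE_TYPE=$(detect_source_type "RHAISTRAT-400")   # -> strat
```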
Then set frontmatter:
IMPORTANT: Run Python scripts from the test-plan repo directory (where pyproject.toml is). Do NOT cd to the output directory before running scripts — use absolute paths for file arguments.
```bash
(cd $(git -C ${CLAUDE_SKILL_DIR} rev-parse --show-toplevel) && uv run python scripts/frontmatter.py set <absolute_path_to_output_dir>/<feature_name>/TestPlan.md \
  feature="<feature_name>" \
  source_key=<JIRA_KEY> \
  source_type=$SOURCE_TYPE \
  version=1.0.0 \
  status=Draft \
  author="<team_name>" \
  components="<comma-separated component names from Jira, or []>" \
  additional_docs="<comma-separated list of doc links, or []>")
```
- components: comma-separated list of component names from the Jira Components field (e.g., "AI Hub,Model Serving"). Use [] if none.
- additional_docs: include the ADR link and any other document links provided by the user. Use [] if none.
- last_updated is auto-set to today's date by the script.
- reviewers defaults to [].

If the script exits with an error, fix the field values and retry; do not write frontmatter by hand.
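The comma-separated components value can be assembled from a Jira component list like this. A sketch: the array contents are example data, and only the "AI Hub,Model Serving" output format comes from this document.

```shell
# Sketch: join component names into the comma-separated string frontmatter.py expects.
components=("AI Hub" "Model Serving")
joined=$(IFS=','; echo "${components[*]}")   # "${arr[*]}" joins on the first char of IFS
echo "components=\"$joined\""
```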
After generating the test plan, collect all gaps reported by the three sub-agents from Step 2.
Extract gaps from each sub-agent's ## Gaps output section
Write <feature_name>/TestPlanGaps.md with all gaps organized by source sub-agent:
```markdown
# Gaps — <Feature Name>

## Scope & Endpoints
{gaps from test-plan.analyze.endpoints, or "No gaps identified."}

## Test Strategy & Risks
{gaps from test-plan.analyze.risks, or "No gaps identified."}

## Environment & Infrastructure
{gaps from test-plan.analyze.infra, or "No gaps identified."}
```
Set frontmatter on TestPlanGaps.md using the frontmatter.py script (reuse SOURCE_TYPE from Step 3.1):
```bash
(cd $(git -C ${CLAUDE_SKILL_DIR} rev-parse --show-toplevel) && uv run python scripts/frontmatter.py set <absolute_path_to_output_dir>/<feature_name>/TestPlanGaps.md \
  feature="<feature_name>" \
  source_key=<JIRA_KEY> \
  status=Open \
  gap_count=<number_of_gaps>)
```
- gap_count: the total number of individual gaps across all three sections
- If no gaps exist, set status=Resolved and gap_count=0
- last_updated is auto-set by the script

If gaps exist, present the user with a structured action menu via AskUserQuestion. List the gaps first, then offer numbered options. Example:
The following gaps were identified in the test plan:
- Endpoint paths for the catalog API are not specified — an API spec or ADR would resolve this
- RBAC roles are unclear — a feature refinement doc would help
- KServe CSI configuration details are missing — a design doc would resolve this
What would you like to do?
- Provide documents — paste file paths (ADR, API spec, design doc) to resolve gaps
- Proceed to review — continue with the test plan as-is (gaps will be noted in TestPlanGaps.md)
- Proceed to review + generate test cases — continue and automatically run /test-plan-create-cases after review
If the user chooses option 1: Read the provided documents, re-run only the relevant sub-agents from Step 2 with the new material, update the test plan, update TestPlanGaps.md (removing resolved gaps, adding any new ones), update the gaps frontmatter (gap_count, status), then present the menu again with remaining gaps (if any).
If the user chooses option 2 or no gaps exist: Proceed to Step 4.
If the user chooses option 3: Proceed to Step 4, and after the review is complete, automatically invoke /test-plan-create-cases with the feature directory.
After the gaps flow is complete, invoke the internal test-plan.review skill with the feature directory.
The reviewer runs the quality rubric (5 criteria, scored 0-2 each, for a 10-point scale).
The reviewer handles auto-revision internally (up to 2 cycles) and writes <feature_name>/TestPlanReview.md with scores and feedback.
Handle the review output:
Read the verdict from <feature_name>/TestPlanReview.md frontmatter
Auto-fix (if any): Apply clearly correct improvements suggested by the reviewer with these constraints:
Use the Edit tool for any auto-fixes applied.
Present summary: show the user the review verdict, the rubric scores, and any remaining gaps recorded in TestPlanGaps.md.

If the verdict is Rework: advise the user to provide additional source documents (ADR, API spec, design doc) to resolve quality issues before generating test cases.
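Reading the verdict out of the review file's frontmatter could be sketched as below. Assumptions: the frontmatter is a simple `---`-delimited YAML block containing a `verdict:` line; the file contents here are demo data.

```shell
# Sketch: extract the verdict field from TestPlanReview.md frontmatter.
review_file=$(mktemp)
printf -- '---\nverdict: Approved\nscore: 9\n---\n# Review\n' > "$review_file"
verdict=$(sed -n '/^---$/,/^---$/p' "$review_file" | sed -n 's/^verdict: //p')
echo "verdict=$verdict"
```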
Related commands:

- /test-plan-resolve-feedback <PR_URL> (after publishing)
- /test-plan-create-cases
- /coverage-assessment

Note: every non-functional requirement category from test-plan.analyze.risks must be addressed or marked Not Applicable.