From the test-plan plugin
Generate individual test case files from an existing test plan. Use after test plan approval to generate individual TC specifications with preconditions, steps, and expected results organized by category and priority.
Install via the plugin hub:

npx claudepluginhub opendatahub-io/skills-registry --plugin test-plan

Usage: /test-plan-create-cases [FEATURE_SOURCE] [--output-dir PATH]

This skill uses the workspace's default tool permissions.
Generate individual test case specification files from an existing test plan.
/test-plan-create-cases [FEATURE_DIR]
Examples:
- /test-plan-create-cases (auto-detects from a prior /test-plan-create run)
- /test-plan-create-cases mcp_catalog
- /test-plan-create-cases /path/to/feature_dir

Parse $ARGUMENTS to extract:

- FEATURE_SOURCE: a local directory (mcp_catalog or /path/to/mcp_catalog), a GitHub branch URL (https://github.com/org/repo/tree/test-plan/RHAISTRAT-400), or a GitHub PR URL (https://github.com/org/repo/pull/5)
- --output-dir (optional): Force creation in the specified directory (contributor override, skips validation)

If no arguments are provided, check for session context from /test-plan-create (a parsing sketch follows the snippet below):
# Check if TEST_PLAN_OUTPUT_DIR environment variable is set
if [ -n "$TEST_PLAN_OUTPUT_DIR" ]; then
# /test-plan-create was just run in this session
feature_dir="$TEST_PLAN_OUTPUT_DIR/<feature_name>"
echo "✓ Auto-detected from /test-plan-create session: $feature_dir"
# Proceed directly to Step 1 (skip Step 0.2)
fi
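For reference, a minimal sketch of how the positional source and the --output-dir flag might be separated (variable names other than FORCE_OUTPUT_DIR are illustrative, not part of the skill):

```bash
# Sketch: split the feature source from the optional --output-dir flag.
source_arg=""
FORCE_OUTPUT_DIR=false
while [ $# -gt 0 ]; do
  case "$1" in
    --output-dir)
      FORCE_OUTPUT_DIR=true   # contributor override, skips validation
      output_dir="$2"; shift 2 ;;
    *)
      source_arg="$1"; shift ;;
  esac
done
```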
If no arguments AND no session context, ask the user via AskUserQuestion:
Where is the TestPlan.md located?
You can provide:
- Local directory path (e.g., ~/Code/collection-tests/mcp_catalog)
- GitHub branch URL (e.g., https://github.com/org/repo/tree/test-plan/RHAISTRAT-400)
- GitHub PR URL (e.g., https://github.com/org/repo/pull/5)
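Downstream, the locate-feature-dir utility (Step 0.2) resolves whichever form the user provides. A rough classification sketch (the github_* labels are assumptions; only "local" appears in the skill's own checks):

```bash
# Sketch: classify the source form before resolving it.
case "$source" in
  https://github.com/*/pull/*) source_type="github_pr" ;;
  https://github.com/*/tree/*) source_type="github_branch" ;;
  *)                           source_type="local" ;;
esac
```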
Install the test-plan package (makes all scripts importable):
(cd $(git -C ${CLAUDE_SKILL_DIR} rev-parse --show-toplevel) && uv sync --extra dev)
If installation fails, inform the user and do NOT proceed. Once installed, all Python scripts will work from any directory.
Skip this step if session context was found (see "Auto-detection from session" above).
Use the shared locate-feature-dir utility:
result=$(cd $(git -C ${CLAUDE_SKILL_DIR} rev-parse --show-toplevel) && uv run python scripts/repo.py locate-feature-dir "<source>")
if [ $? -ne 0 ]; then
echo "$result"
exit 1
fi
# Parse JSON output
feature_dir=$(echo "$result" | jq -r '.feature_dir')
source_type=$(echo "$result" | jq -r '.source_type')
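The jq calls above assume the utility emits JSON with at least these two fields (values illustrative):

```json
{
  "feature_dir": "/home/user/collection-tests/mcp_catalog",
  "source_type": "local"
}
```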
Validate local paths against skill repository (unless --output-dir flag was used):
if [ "$source_type" = "local" ]; then
# Check for --output-dir flag (contributor override)
FORCE_OUTPUT_DIR="${FORCE_OUTPUT_DIR:-false}"
# Validate against skill repository
export CLAUDE_SKILL_DIR
force_flag=$([ "$FORCE_OUTPUT_DIR" = "true" ] && echo "--force" || echo "")
(cd $(git -C ${CLAUDE_SKILL_DIR} rev-parse --show-toplevel) && uv run python scripts/repo.py validate-local-path "$feature_dir" $force_flag) || exit 1
fi
Note: GitHub sources are always external repos, so no skill repo validation needed.
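Run by hand, the override path might look like this (the directory is illustrative; the script, subcommand, and --force flag are the ones used above):

```bash
# Contributor override: validate an external feature dir, bypassing
# the skill-repo containment check.
(cd $(git -C ${CLAUDE_SKILL_DIR} rev-parse --show-toplevel) && \
  uv run python scripts/repo.py validate-local-path ~/scratch/my_feature --force)
```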
- Read <feature_dir>/TestPlan.md using the Read tool
- Extract source_key from the YAML frontmatter — this will be used in Step 3.1 to set frontmatter on each test case file
- Note the TC-<CATEGORY>-<NUMBER> prefixes and their meanings
- Check whether <feature_dir>/TestPlanGaps.md exists (generated by /test-plan-create)
- Read ${CLAUDE_SKILL_DIR}/test-case-template.md using the Read tool

Check for existing test cases:
regen_check=$(cd $(git -C ${CLAUDE_SKILL_DIR} rev-parse --show-toplevel) && uv run python scripts/tc_regeneration.py check <feature_dir>)
mode=$(echo "$regen_check" | jq -r '.mode')
existing_count=$(echo "$regen_check" | jq -r '.existing_count')
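The check is assumed to return JSON along these lines (counts and paths illustrative):

```json
{
  "mode": "regenerate",
  "existing_count": 12,
  "files": [
    "test_cases/TC-API-001.md",
    "test_cases/TC-E2E-001.md"
  ]
}
```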
If mode = "regenerate" (existing test cases found):
a. Read all existing TC files using Read tool (satisfies Write tool requirement):
echo "$regen_check" | jq -r '.files[]' | while read file; do
# Read each existing TC file
done
b. Ask for confirmation via AskUserQuestion:
Regeneration Mode
Found <existing_count> existing test cases in test_cases/.
Regenerating will overwrite all existing test cases. You can review changes via git diff before publishing.
Proceed with regeneration? [yes/no]
c. If no: Exit gracefully
d. If yes: Continue to Step 3 with REGENERATION_MODE=true
If mode = "create" (no existing test cases):
Set REGENERATION_MODE=false.

Process one category at a time from Section 5.2. For each category:
Design all test cases for that category:
Write or Edit the TC-<CATEGORY>-<NUMBER>.md files for that category immediately before moving to the next:
- REGENERATION_MODE=true: Use the Edit tool for files that already exist (preserves git history) and the Write tool for new files
- REGENERATION_MODE=false: Use the Write tool for all files

Include YAML frontmatter at the top of each file:
---
test_case_id: TC-<CATEGORY>-<NUMBER>
source_key: <STRAT_KEY_FROM_TEST_PLAN>
priority: <P0|P1|P2>
status: Draft
automation_status: Not Started
last_updated: "<today_date>"
# upgrade_phase: pre|post|both # see Step 3.4 — set for ANY TC whose expected results differ between upgrade states
---
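A filled-in example (IDs and values are hypothetical; source_key must come from the test plan's frontmatter):

```yaml
---
test_case_id: TC-API-001
source_key: RHAISTRAT-400
priority: P0
status: Draft
automation_status: Not Started
last_updated: "2026-05-04"
upgrade_phase: both
---
```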
- source_key: use the value extracted from the test plan's frontmatter in Step 1
- last_updated: MUST be a quoted string (e.g., "2026-05-04"), not an unquoted date
- Evaluate upgrade_phase for every TC before finalising its frontmatter — including TC-UI-*, TC-E2E-*, and all other categories, not just TC-UPGRADE-*. The question is always the same: does this TC's expected behaviour differ between the old and new version? If yes, set the phase. Do not skip this evaluation for any TC.

E2E test cases (mandatory): After processing all categories, generate TC-E2E-*.md test cases that validate the user journeys defined in the strategy:
- Use the TC-E2E-<NUMBER> naming convention (e.g., TC-E2E-001, TC-E2E-002)

Upgrade test cases (conditional): Read Section 7.2 (Upgrade/Migration) of the TestPlan.md. If Section 7.2 describes meaningful upgrade-specific behaviour (not just "Not Applicable" or a single-sentence disclaimer), generate upgrade-aware TCs:
First, identify what kind of upgrade scenario this is — it determines the dominant phase:
- post TCs for new behaviour, pre TCs for state that disappears after upgrade, both for regressions
- both TCs — the goal is to establish a PASS baseline before upgrade and detect a REGRESSION after

Phase values and when to use them:

- upgrade_phase: pre — behaviour or state that only exists on the old version. Expected to FAIL or be N/A on the new version. Preconditions must state the source version.
- upgrade_phase: post — behaviour that only exists after upgrade (new feature, new resource, new route). Expected to FAIL on the old version. Preconditions must state the target version.
- upgrade_phase: both — behaviour that should work on both versions. Use for any TC that establishes a pre-upgrade baseline and validates the same behaviour post-upgrade. E2E TCs spanning the full upgrade journey also use both — even if their steps cross both versions, they need to run on both clusters. Always include at least one UI-capable TC with both so the pre-upgrade run has browser content to execute.
- No upgrade_phase — reserve for TCs that are genuinely unrelated to the upgrade scenario, i.e. TCs that would exist identically in a non-upgrade test plan. Within an upgrade-focused test plan, if a TC's expected results should be the same on both versions, use upgrade_phase: both (not no phase) to make its role in the regression suite explicit. "No phase" and both are functionally equivalent in filtering, but both signals intent.

Apply upgrade_phase based on what the TC tests, not which category it belongs to. Any TC in any category (TC-UI-*, TC-E2E-*, etc.) whose expected results or preconditions differ between versions must be tagged. The question is: "Would this TC pass on the old version AND the new version?" If yes to both → both. If only new → post. If only old → pre.
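One practical consequence of this tagging: the pre-upgrade run can be selected straight from frontmatter. A filtering sketch (not part of the skill; assumes GNU grep):

```bash
# List TCs to execute before the upgrade: those tagged pre or both.
grep -rlE '^upgrade_phase: (pre|both)' --include='TC-*.md' test_cases/
```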
Add upgrade TCs to their own "Upgrade Testing" section in INDEX.md.
This category-by-category approach ensures cross-category awareness (no duplicate coverage) while keeping each batch focused.
Expected Results quality: Each Expected Result must be an observable fact that directly confirms the test objective. Avoid vague conclusions ("works correctly", "renders successfully"). Name the specific page state, URL pattern, response code, element, or resource field.
Before writing each assertion, ask: "Is this testing what the TC is fundamentally about, or just a side effect?" Two patterns follow from this:
Accessibility / reachability tests (does this URL work? does this link open?): assert the absence of error — "page does not contain '500 Internal Server Error'", "response is HTTP 200", "page does not show 'Application is not available'". Do not assert presence of specific UI components (IDE editor pane, console window, specific layout element) — these vary by configuration, workbench image, and product version and will cause failures unrelated to the feature under test.
Content / format tests (does this show the right value? did something change?): assert the specific observable fact — "URL contains hostname pattern X", "field value equals Y", "element Z is visible". Use this only when the content itself IS what is being verified.
A test that FAILs for the wrong reason is worse than no test at all. When in doubt, prefer the narrower assertion.
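For example (wording illustrative):

```markdown
<!-- Too vague (a conclusion, not an observation): -->
Expected Result: The workbench page renders successfully.

<!-- Reachability test (assert the absence of error): -->
Expected Result: Response is HTTP 200 and the page does not show
"Application is not available".

<!-- Content test (assert the specific fact being verified): -->
Expected Result: The route URL contains the expected hostname pattern.
```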
Anti-hallucination rules:
After all categories are complete (including upgrade TCs if generated):
- Create the <feature_dir>/test_cases/ directory if it doesn't already exist: mkdir -p <feature_dir>/test_cases
- Write <feature_dir>/test_cases/INDEX.md atomically (regenerate the entire file):
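A minimal skeleton for the regenerated index (the layout is an assumption, apart from the dedicated "Upgrade Testing" section required for upgrade TCs):

```markdown
# Test Case Index

## API
- [TC-API-001](TC-API-001.md) (P0): <objective>

## E2E
- [TC-E2E-001](TC-E2E-001.md) (P1): <user journey>

## Upgrade Testing
- [TC-UPGRADE-001](TC-UPGRADE-001.md) (P0): phase: post
```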
Update <feature_dir>/TestPlan.md using the Edit tool:
- Add a link to test_cases/INDEX.md
- Reference /coverage-assessment

Update <feature_dir>/README.md to add a link to the test cases index (test_cases/INDEX.md).

After generating all test case files and updating the test plan, validate coverage:
- If TestPlanGaps.md was read in Step 1.5, verify that no test cases were created for endpoints or areas flagged as pending/missing. If any were, remove them and flag the inconsistency.
- If <feature_dir>/TestPlanGaps.md exists, append a ## Test Case Coverage Gaps section with any coverage gaps found (uncovered endpoints, missing objectives, priority mismatches, missing E2E scenarios). If the file does not exist, create it with just this section. (An example of this section appears after the validation step below.)

After all test case files are written, validate their frontmatter in one pass:
(cd $(git -C ${CLAUDE_SKILL_DIR} rev-parse --show-toplevel) && uv run python scripts/validate_test_cases.py <feature_dir> test-case)
If any file fails validation, fix the frontmatter in that file and re-run the validation.
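For reference, the ## Test Case Coverage Gaps section appended in the coverage step above might look like this (entries illustrative):

```markdown
## Test Case Coverage Gaps

- Endpoint `GET /api/v1/widgets/{id}` has no covering TC
- No E2E scenario for the guided-setup user journey
- TC-API-003 is P1 in the plan but P2 in its frontmatter
```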
Related commands: /test-plan-create, /coverage-assessment