Generate executable test automation code from test case specifications with intelligent placement in component or downstream repos. Use after test cases are reviewed to create production-ready pytest code that follows repository conventions.
Install:

npx claudepluginhub opendatahub-io/skills-registry --plugin test-plan

Usage: <FEATURE_SOURCE> [--test-cases TC-ID,TC-ID] [--target-repo PATH]

This skill uses the workspace's default tool permissions.
Generate executable test automation code (pytest, etc.) from TC-*.md test case specification files, with intelligent auto-placement in component repos or downstream E2E repo.
/test-plan-case-implement <FEATURE_SOURCE> [--test-cases TC-API-001,TC-API-002] [--target-repo ~/Code/opendatahub-tests]
Examples:
/test-plan-case-implement features/notebooks/RHAISTRAT-400-notebook-spawning (local path)
/test-plan-case-implement https://github.com/fege/collection-tests/pull/7 (GitHub PR)
/test-plan-case-implement test-plan/RHAISTRAT-400 (GitHub branch)
/test-plan-case-implement https://github.com/fege/collection-tests/pull/7 --test-cases TC-API-001,TC-API-002 (selective)
/test-plan-case-implement features/notebooks/RHAISTRAT-400 --target-repo ~/Code/opendatahub-tests

Note: After publishing a test plan, artifacts only exist on the PR branch. Pass the PR URL:
/test-plan-publish
/test-plan-case-implement https://github.com/fege/collection-tests/pull/7
Parse $ARGUMENTS to extract:
- Feature source (required), one of: a local path (features/notebooks/RHAISTRAT-400-notebook-spawning), a GitHub PR URL (https://github.com/org/repo/pull/7), or a GitHub branch (https://github.com/org/repo/tree/test-plan/RHAISTRAT-400 or test-plan/RHAISTRAT-400)
- --test-cases (optional): Comma-separated list of test case IDs to implement (e.g., TC-API-001,TC-API-002,TC-E2E-001)
- --target-repo (optional): Override auto-detected target repository path or URL

If the first argument is missing or starts with --, fail with a usage error showing the required format and PR/local path examples (see the parsing sketch below).
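A minimal parsing sketch for this argument contract (illustrative only; the function and variable names are hypothetical):

```python
import sys

def parse_arguments(raw_args: list[str]) -> dict:
    """Parse /test-plan-case-implement arguments per the spec above."""
    if not raw_args or raw_args[0].startswith("--"):
        sys.exit("Usage: /test-plan-case-implement <FEATURE_SOURCE> "
                 "[--test-cases TC-ID,TC-ID] [--target-repo PATH]")
    parsed = {"feature_source": raw_args[0].strip(),
              "test_cases": None, "target_repo": None}
    flags = iter(raw_args[1:])
    for flag in flags:
        if flag == "--test-cases":
            parsed["test_cases"] = next(flags).split(",")
        elif flag == "--target-repo":
            parsed["target_repo"] = next(flags)
    return parsed
```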
Install the test-plan package (makes all scripts importable):
(cd $(git -C ${CLAUDE_SKILL_DIR} rev-parse --show-toplevel) && uv sync --extra dev)
If installation fails, inform the user and do NOT proceed.
Parse the first argument from $ARGUMENTS (strip any leading/trailing whitespace, ignore flags).
If no feature source provided or first arg starts with --, exit with the error message from the "From arguments" section above.
If feature source is a GitHub branch or PR URL:
(cd $(git -C ${CLAUDE_SKILL_DIR} rev-parse --show-toplevel) && uv run python scripts/repo.py locate-feature-dir "<feature_source>")
Extract feature_dir from the JSON result.
If feature source is a local path, use it directly as feature_dir.
Run unified preflight validation and detection:
(cd $(git -C ${CLAUDE_SKILL_DIR} rev-parse --show-toplevel) && uv run python scripts/preflight.py "$feature_dir")
The script returns JSON with:
- valid (bool) - If false, show error and stop
- feature_dir, tc_count, testplan_frontmatter
- frontmatter_components, content_components, all_components
- repos (component → repo mapping)
- unique_repos (list of detected repositories)
- repos_from_frontmatter (repos from Jira components - highest priority)
- odh_test_context_path (or null if not found)

Extract values from the JSON result as needed for subsequent steps.
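For illustration, a successful preflight result might look like this (all values are hypothetical):

```python
# Hypothetical preflight.py output, shaped per the field list above.
preflight = {
    "valid": True,
    "feature_dir": "features/notebooks/RHAISTRAT-400-notebook-spawning",
    "tc_count": 12,
    "testplan_frontmatter": {"components": ["notebooks"]},
    "frontmatter_components": ["notebooks"],
    "content_components": ["notebooks", "dashboard"],
    "all_components": ["notebooks", "dashboard"],
    "repos": {"notebooks": "opendatahub-io/notebooks"},
    "unique_repos": ["opendatahub-io/notebooks"],
    "repos_from_frontmatter": ["opendatahub-io/notebooks"],
    "odh_test_context_path": None,  # null when odh-test-context is not found
}
```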
If odh_test_context_path is null, ask user via AskUserQuestion:
odh-test-context not found. Provides test conventions for ~162 opendatahub-io repos.
- Specify path to existing clone
- Clone from GitHub to ~/Code/
- Proceed without it (slower, less accurate)
Handle user choice.
Based on unique_repos from preflight:
If 1 repo: Ask "Proceed with {repo}?" (yes/specify-different)
If multiple repos: Show list prioritized by frontmatter components, ask user to choose
If no repos: Ask user to specify repository manually
Store: code_repo (e.g., opendatahub-io/notebooks)
Find code repo locally:
code_repo_path=$(cd $(git -C ${CLAUDE_SKILL_DIR} rev-parse --show-toplevel) && uv run python scripts/repo.py find-target "$code_repo")
If not found, ask to clone or specify path.
IMPORTANT: When analyzing code_repo_path, read code files and use grep/bash. Do NOT import target-repo dependencies (they are not in the test-plan venv) or use inspect.signature().
Call load_repo_test_context(repo_name, odh_test_context_path) from scripts/utils/repo_utils.py.
The function loads <odh_test_context_path>/tests/<repo_name>.json.

Set variables:
- test_context = function result (dict or None)
- use_odh_context = True if test_context is not None, else False

If test_context exists, save it to <feature_dir>/test_implementation_context.json for reference.
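A sketch of this step, assuming load_repo_test_context behaves as documented above and that repo_name, odh_test_context_path, and feature_dir are already in scope:

```python
import json
from pathlib import Path

from scripts.utils.repo_utils import load_repo_test_context

# Returns the dict parsed from <odh_test_context_path>/tests/<repo_name>.json,
# or None when that file does not exist.
test_context = load_repo_test_context(repo_name, odh_test_context_path)
use_odh_context = test_context is not None

if use_odh_context:
    # Keep a copy alongside the feature artifacts for later reference.
    dest = Path(feature_dir) / "test_implementation_context.json"
    dest.write_text(json.dumps(test_context, indent=2))
```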
Call load_repo_test_context('opendatahub-tests', odh_test_context_path) from scripts/utils/repo_utils.py.
The function returns: context dict or None
Set downstream_context to the function result, or this fallback if None:

{'testing': {'framework': 'pytest'}, 'agent_readiness': 'medium'}

Use scripts/utils/repo_utils.py::get_framework(test_context):
- If test_context exists: returns test_context['testing']['framework']
- If not: falls back to manual detection

Returns: framework (str: pytest, unittest, playwright, robot, ginkgo, go-testing, jest, cypress)
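A usage sketch of this call, assuming the behavior described above:

```python
from scripts.utils.repo_utils import get_framework

# Prefers the recorded convention; otherwise detects the framework manually.
framework = get_framework(test_context)
assert framework in {
    "pytest", "unittest", "playwright", "robot",
    "ginkgo", "go-testing", "jest", "cypress",
}
```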
If use_odh_context == True:
Extract conventions and format as markdown:
(cd $(git -C ${CLAUDE_SKILL_DIR} rev-parse --show-toplevel) && uv run python scripts/extract_and_format_conventions.py "$feature_dir" "$code_repo_name" "$odh_test_context_path") > <feature_dir>/test_implementation_conventions.md
The script also saves test_implementation_context.json to feature_dir.

Set conventions_file = <feature_dir>/test_implementation_conventions.md
If use_odh_context == False (no odh-test-context available):
- Create tests/<repo_name>.json with discovered framework, test directories, conventions, linting tools
- Use the repo's tests/ directory as examples

Store: conventions (dict or markdown content)
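A hypothetical minimal context file produced by this manual discovery (the keys echo the fields referenced elsewhere in this document; the exact schema is an assumption):

```python
# Hypothetical tests/<repo_name>.json content from manual discovery.
manual_context = {
    "testing": {
        "framework": "pytest",        # discovered framework
        "directories": ["tests"],     # discovered test directories
        "linting": ["ruff"],          # discovered linting tools
    },
    "conventions": {
        "naming": "test_<behavior>",  # inferred from existing tests/
    },
}
```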
Load repo instructions and pattern guides:
(cd $(git -C ${CLAUDE_SKILL_DIR} rev-parse --show-toplevel) && uv run python scripts/load_pattern_guides.py "$code_repo_path" "$framework")
Returns JSON with:
- repo_instructions_files - Found CLAUDE.md, AGENTS.md, CONSTITUTION.md
- repo_instructions_content - Combined content
- pattern_guide_files - Found {framework}-tests.md, testing-standards.md
- pattern_guide_content - Combined content
- needs_generation - true if no pattern guides found

If needs_generation == true:
(cd $(git -C ${CLAUDE_SKILL_DIR} rev-parse --show-toplevel) && uv run python scripts/repo.py find-known tiger-team)
Run /test-rules-generator <code_repo_path> to generate guides.

Pattern guides describe HOW to write tests (fixtures, naming, mocking). They are passed to the code generation sub-agents in Step 5.
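When guides are found, the parsed result might look like this (paths and contents are hypothetical):

```python
# Hypothetical load_pattern_guides.py output for a pytest repo.
pattern_guides = {
    "repo_instructions_files": ["CLAUDE.md"],
    "repo_instructions_content": "# CLAUDE.md\n...",
    "pattern_guide_files": ["docs/pytest-tests.md"],
    "pattern_guide_content": "# Pytest patterns\n...",
    "needs_generation": False,
}
```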
If use_odh_context == True AND test_context contains container_recipe:
Show user the container validation option via AskUserQuestion:
Container validation is available using odh-test-context.
- Base image: <container_recipe.base_image>
- Can validate linting and test execution in isolated environment
Validate generated tests in container after creation? [yes/no]
If yes: Set validate_in_container = True and store validation_recipe = test_context['container_recipe']
If no: Set validate_in_container = False
If container recipe NOT available:
Set validate_in_container = False.

Extract repository capabilities from Step 1:
- code_repo_readiness from test_context.get('agent_readiness', 'unknown')
- code_repo_has_tests from checking if 'tests' in test_context.get('testing', {}).get('directories', [])
- downstream_readiness from downstream_context.get('agent_readiness', 'medium')

Invoke the test-plan.analyze.placement forked subagent:
placement_decisions = invoke_skill_forked(
"test-plan.analyze.placement",
args={
'feature_dir': feature_dir,
'code_repo': code_repo,
'code_repo_readiness': code_repo_readiness,
'code_repo_has_tests': code_repo_has_tests,
'downstream_readiness': downstream_readiness
}
)
The subagent analyzes each TC, applies its placement philosophy, and returns placement recommendations.

Store the returned placement decisions in the test_cases list (each TC dict includes placement_location, level, scores, reasons).
If any TCs are placed downstream or both, locate downstream repository:
downstream_repo_path=$(cd $(git -C ${CLAUDE_SKILL_DIR} rev-parse --show-toplevel) && uv run python scripts/repo.py find-target "opendatahub-io/opendatahub-tests")
downstream_repo_path=$(cd $(git -C ${CLAUDE_SKILL_DIR} rev-parse --show-toplevel) && uv run python scripts/repo.py clone "<downstream_url>" "~/Code/opendatahub-tests")
Store: downstream_repo_path

If --test-cases was provided:
- Parse the comma-separated IDs (e.g., TC-API-001,TC-API-002,TC-E2E-001)
- Validate each ID exists in test_cases/
- selected_test_cases = [parsed TC IDs]
- mode = "selective"

If --test-cases was NOT provided:
- Read test_cases/INDEX.md
- selected_test_cases = [all TC IDs]
- mode = "batch"

Present summary:
- Selective: "Implementing <N> selected test case(s): <TC IDs>"
- Batch: "Implementing ALL test cases for feature: <feature_name>. Total: <N> test cases"

Show counts by priority (P0/P1/P2) and by category.
Ask for confirmation via AskUserQuestion: Proceed? [yes/no]
Filter test cases by automation_status:
(cd $(git -C ${CLAUDE_SKILL_DIR} rev-parse --show-toplevel) && uv run python scripts/filter_test_cases.py "$feature_dir" $selected_test_cases)
Returns JSON with to_implement and already_implemented arrays.
If already_implemented is not empty, ask via AskUserQuestion: Re-implement these? [yes/no]
Use to_implement list for subsequent steps.
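A sketch of consuming the filter output (assuming the JSON shape above; ask_user is a hypothetical stand-in for the AskUserQuestion prompt):

```python
import json
import subprocess

raw = subprocess.run(
    ["uv", "run", "python", "scripts/filter_test_cases.py",
     feature_dir, *selected_test_cases],
    capture_output=True, text=True, check=True,
).stdout
filtered = json.loads(raw)

to_implement = filtered["to_implement"]
already = filtered["already_implemented"]
if already and ask_user(f"Re-implement {already}?"):  # hypothetical prompt helper
    to_implement += already
```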
Parse all selected test cases into structured data:
(cd $(git -C ${CLAUDE_SKILL_DIR} rev-parse --show-toplevel) && uv run python scripts/parse_test_cases.py "$feature_dir" $selected_test_cases)
Returns a JSON array of structured TC dicts.
Store the parsed TCs and add placement decisions from Step 2.2 to each TC dict.
This test_cases array will be passed to the sub-agent in Step 5.
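Merging the Step 2.2 placement decisions into the parsed TCs might look like this (illustrative; the per-decision keys follow the fields listed in Step 2.2, while the "tc_id" key is an assumption):

```python
# Index placement decisions by TC ID, then attach them to each parsed TC.
placements = {p["tc_id"]: p for p in placement_decisions}
for tc in test_cases:
    decision = placements[tc["tc_id"]]
    tc["placement_location"] = decision["placement_location"]
    tc["level"] = decision["level"]
    tc["scores"] = decision["scores"]
    tc["reasons"] = decision["reasons"]
```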
CRITICAL: Call map_test_files.py script - do NOT manually create mappings or read existing test files.
Determine file organization strategy from conventions:
- If test_context shows subdirectories (unit/, api/, etc.) → by-category-with-subdirs
- Otherwise → by-category (flat structure, one file per category)

Call the script to generate file mapping:
(cd $(git -C ${CLAUDE_SKILL_DIR} rev-parse --show-toplevel) && uv run python scripts/map_test_files.py \
"$feature_dir" "$org_pattern" "$test_dir" \
--feature-name "$feature_name" \
--tc-ids "$(echo $selected_test_cases | tr ' ' ',')")
Parse the JSON output to extract:
- file_mapping - Array of {file_path, test_cases[], function_names[]}
- strategy, total_test_cases, total_files

The script handles all of the mapping details; DO NOT manually create /tmp/*.json files, read existing test files, or generate file paths yourself. Use the script output directly.
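A hypothetical file_mapping entry, per the shape above (paths and names are illustrative):

```python
# Illustrative mapping: one entry per test file to generate.
file_mapping = [
    {
        "file_path": "tests/api/test_notebook_spawning.py",
        "test_cases": ["TC-API-001", "TC-API-002"],
        "function_names": [
            "test_spawn_notebook_default_image",
            "test_spawn_notebook_custom_resources",
        ],
    },
]
```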
Present the mapping table to user and ask for confirmation before proceeding to Step 5.
CRITICAL: Invoke /test-plan-generate-test-file sub-agents in parallel (one per file) - do NOT generate code yourself.
Identify common setup requirements:
(cd $(git -C ${CLAUDE_SKILL_DIR} rev-parse --show-toplevel) && uv run python scripts/analyze_common_setup.py "$feature_dir")
Returns JSON array of preconditions used by 2+ TCs (for fixture generation).
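The output might look like this (the key names are hypothetical; only the "preconditions used by 2+ TCs" contract above is documented):

```python
# Hypothetical analyze_common_setup.py output: fixture candidates.
common_setup_requirements = [
    {"precondition": "Test namespace exists",
     "used_by": ["TC-API-001", "TC-API-002"]},
    {"precondition": "Notebook controller is deployed",
     "used_by": ["TC-API-001", "TC-E2E-001"]},
]
```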
Ensure these variables are in context (sub-agents inherit them):
- file_mapping - Array from Step 4
- test_cases - Array from Step 3.3
- framework - From Step 1.1
- conventions_file - Path to conventions (from Step 1.2)
- pattern_guide - Content from Step 1.2b (or null)
- repo_instructions - Content from Step 1.2b (or null)
- common_setup_requirements - From analyze_common_setup.py above
- code_repo_path - Repository path
- feature_dir - Feature directory path

Invoke sub-agents in parallel using the Agent tool (one per file in file_mapping):
For each file index i, build prompt with all needed data:
Agent(
subagent_type="test-plan-generate-test-file",
description="Generate test file {i}",
prompt="""Generate test file from this data:
```json
{
"file_index": {i},
"file_path": "{file_mapping[i].file_path}",
"test_cases": {json array of TCs for this file},
"function_names": {file_mapping[i].function_names},
"framework": "{framework}",
"conventions_file": "{conventions_file}",
"pattern_guide": "{pattern_guide or null}",
"repo_instructions": "{repo_instructions or null}",
"common_setup_requirements": {common_setup_requirements},
"code_repo_path": "{code_repo_path}",
"feature_dir": "{feature_dir}"
}
```

Write result to /tmp/test_plan_results/file_{i}.json
Return: {{"status": "complete", "file_index": {i}, "result_file": "/tmp/test_plan_results/file_{i}.json"}}"""
)
Send **all invocations in one message** for parallel execution. Sub-agents run with `context: fork` (isolated, returning clean results).
**Read result files** after all agents complete:
```bash
for i in $(seq 0 $((${#file_mapping[@]} - 1))); do
result=$(cat /tmp/test_plan_results/file_${i}.json)
# Parse: file_path, content, tc_ids, functions[], quality_summary, draft_files[], errors[]
done
rm -rf /tmp/test_plan_results/
```
Collect into files_to_write array. Proceed immediately to Step 6.
CRITICAL: Write the files from files_to_write array. Do NOT generate or modify test code - just write what the sub-agents returned.
For each entry in files_to_write:
- mkdir -p <dirname>
- python -m py_compile <file_path>

For each written file:
cd <target_repo_path>
python -c "import sys; sys.path.insert(0, '.'); exec(open('<file_path>').read())"
If validate_in_container == True:
Start container:
podman run -d --name test-context-<repo_name>-validation \
-v <target_repo_path>:/app:Z \
-w /app \
<validation_recipe.base_image> \
sleep infinity
Install system dependencies:
podman exec test-context-<repo_name>-validation bash -c \
"apt-get update && apt-get install -y <system_deps>"
Run setup commands from validation_recipe.setup_commands
Run lint on generated files: Report lint results (pass/fail)
Run tests on generated files: For each generated test file, run pytest and report results
Cleanup container:
podman rm -f test-context-<repo_name>-validation
Present validation summary to user.
Build updates array from sub-agent results (ONLY for successfully implemented TCs):
For each sub-agent result:
- functions array (these scored 4+): create an update entry
- draft_files (scored 0-3, need manual review): no update entry
- errors (generation failed): no update entry

Update frontmatter in bulk:
# updates.json: [{"tc_id": "TC-API-001", "automation_status": "Implemented", "file": "...", "function": "..."}]
echo "$updates_json" | (cd $(git -C ${CLAUDE_SKILL_DIR} rev-parse --show-toplevel) && uv run python scripts/update_tc_frontmatter.py "$feature_dir" -)
Returns JSON with updated_count, updated_tcs, errors. Show any errors to user.
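For reference, building updates_json from the sub-agent results might look like this (the per-function keys are assumptions; only the updates.json entry format above is documented):

```python
import json

updates = []
for result in subagent_results:     # one result per generated file
    for fn in result["functions"]:  # implemented functions (scored 4+)
        updates.append({
            "tc_id": fn["tc_id"],   # "tc_id"/"name" keys assumed
            "automation_status": "Implemented",
            "file": result["file_path"],
            "function": fn["name"],
        })
updates_json = json.dumps(updates)
```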
If feature source is a GitHub branch, commit updated TC files:
git add <feature_dir>/test_cases/*.md
git commit -m "test-plan(<source_key>): mark TCs as implemented"
git push origin <branch_name>
Aggregate quality data from all sub-agent results and display an implementation summary.