Update an existing test plan with new documentation (ADR, API specs, design docs). Re-analyzes, updates artifacts, bumps version, and optionally regenerates test cases. Use when requirements evolve or new technical documentation becomes available after initial test plan creation.
npx claudepluginhub opendatahub-io/skills-registry --plugin test-plan

/test-plan-update <SOURCE> <NEW_DOC_PATH> [<NEW_DOC_PATH>...]

This skill uses the workspace's default tool permissions.
Update an existing test plan when new information becomes available (ADRs, API specs, design documents, requirement changes).
/test-plan-update <SOURCE> <NEW_DOC_PATH> [<NEW_DOC_PATH>...]
Examples:
/test-plan-update ~/Code/collection-tests/mcp_catalog adr.pdf
/test-plan-update https://github.com/org/repo/pull/42 api-spec.md design.md
/test-plan-update https://github.com/org/repo/tree/test-plan/RHAISTRAT-400 requirements-v2.md

Parse $ARGUMENTS to extract:
- mcp_catalog or /path/to/mcp_catalog
- https://github.com/org/repo/tree/test-plan/RHAISTRAT-400
- https://github.com/org/repo/pull/5

If insufficient arguments are provided, ask the user via AskUserQuestion:
Where is the test plan to update?
You can provide:
- Local directory path (e.g., ~/Code/collection-tests/mcp_catalog)
- GitHub branch URL (e.g., https://github.com/org/repo/tree/test-plan/RHAISTRAT-400)
- GitHub PR URL (e.g., https://github.com/org/repo/pull/5)
Then ask:
What new documentation should be incorporated?
Provide one or more file paths (ADR, API spec, design doc, requirements doc, etc.):
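Once a SOURCE is in hand, its three accepted forms can be told apart by shape alone. A minimal sketch, assuming glob patterns like these (the real detection lives in scripts/repo.py's locate-feature-dir, which also returns source_type):

```shell
# Classify a SOURCE argument by shape (patterns are illustrative assumptions)
classify_source() {
  case "$1" in
    https://github.com/*/pull/*) echo "github_pr" ;;      # PR URL
    https://github.com/*/tree/*) echo "github_branch" ;;  # branch URL
    *)                           echo "local" ;;          # anything else: treat as local path
  esac
}

classify_source "https://github.com/org/repo/pull/5"
classify_source "https://github.com/org/repo/tree/test-plan/RHAISTRAT-400"
classify_source "~/Code/collection-tests/mcp_catalog"
```

Anything that is not a recognized GitHub URL falls through to the local-path branch, which is then validated against the skill repository in Step 1.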
Install the test-plan package (makes all scripts importable):
(cd $(git -C ${CLAUDE_SKILL_DIR} rev-parse --show-toplevel) && uv sync --extra dev)
If installation fails, inform the user and do NOT proceed. Once installed, all Python scripts will work from any directory.
Use the shared locate-feature-dir utility:
result=$(cd $(git -C ${CLAUDE_SKILL_DIR} rev-parse --show-toplevel) && uv run python scripts/repo.py locate-feature-dir "<source>")
if [ $? -ne 0 ]; then
echo "$result"
exit 1
fi
# Parse JSON output
feature_dir=$(echo "$result" | jq -r '.feature_dir')
source_type=$(echo "$result" | jq -r '.source_type')
Validate local paths against skill repository:
if [ "$source_type" = "local" ]; then
# Validate against skill repository (no force flag for updates)
export CLAUDE_SKILL_DIR
(cd $(git -C ${CLAUDE_SKILL_DIR} rev-parse --show-toplevel) && uv run python scripts/repo.py validate-local-path "$feature_dir") || exit 1
fi
Note: GitHub sources are always external repos, so no skill repo validation needed.
For each new document path provided:
# doc_paths: the NEW_DOC_PATH arguments parsed from $ARGUMENTS (variable name assumed)
for doc_path in "${doc_paths[@]}"; do
  if [ ! -f "$doc_path" ]; then
    echo "❌ ERROR: Document not found: $doc_path"
    exit 1
  fi
done
Read <feature_dir>/TestPlan.md using Read tool
- Extract frontmatter fields: source_key, version, feature, components, additional_docs

Read <feature_dir>/TestPlanGaps.md (if exists)
Read <feature_dir>/TestPlanReview.md (if exists)
Read <feature_dir>/README.md
Check for test cases: <feature_dir>/test_cases/INDEX.md
- If it exists, set has_test_cases=true

For each new document path:
- Read its full content and add it to the additional_docs list in frontmatter

Invoke the three analyzer skills in parallel using the Skill tool, passing:
Original strategy content (from Jira via source_key, or from local cache)
Existing additional docs (from frontmatter additional_docs)
New documents (just read in Step 2)
test-plan.analyze.endpoints: Re-extract feature scope and API endpoints
test-plan.analyze.risks: Re-determine test levels, types, priorities, risks
test-plan.analyze.infra: Re-identify environment, test data, infrastructure needs
Each analyzer returns:
Invoke the test-plan-merge forked sub-agent using the Skill tool to intelligently merge new analyzer findings into the existing test plan:
Skill: test-plan-merge
Arguments:
Old TestPlan.md: <full content from Step 1>
New Findings from Analyzers:
- Endpoints: <findings from Step 3>
- Risks: <findings from Step 3>
- Infrastructure: <findings from Step 3>
New Documents: <full content from Step 2 with labels>
Example:
ADR (adr-v2.pdf):
<full text content of the ADR>
API Spec (api-spec.md):
<full text content of the API spec>
Rationale: The merge agent needs access to actual document content (not just filenames) to:
The sub-agent has `context: fork`, so it runs in isolation and returns cleanly.
The merge sub-agent returns:
Validate merge (manual review):
Present the change summary to the user:
Merge completed. Changes proposed:
Sections modified: <list from statistics>
User edits preserved in: <list from statistics>
Items added: <count>
Items updated: <count>
Items deprecated: <count>
<full change summary from agent>
Ask user via AskUserQuestion:
Review merge changes before applying
The merge agent updated sections. Review the changes above.
Proceed with these updates? [yes/no]
If no: Stop without applying updates (TestPlan.md unchanged)
If yes: Continue to apply updates
Rationale: Provides manual validation that merge preserved user edits and made sensible decisions. User can review the change summary before committing to the updates.
Apply the updates:
Invoke the test-plan-resolve-gaps forked sub-agent using the Skill tool to cross-reference old gaps with new findings:
Skill: test-plan-resolve-gaps
Arguments:
Old Gaps from TestPlanGaps.md: <content from Step 1>
New Findings from Analyzers:
- Endpoints: <findings from Step 3>
- Risks: <findings from Step 3>
- Infrastructure: <findings from Step 3>
New Documents: <full content from Step 2 with labels>
Example:
ADR (adr-v2.pdf):
<full text content>
Rationale: Same as Step 4 - the agent needs actual document content to verify gap resolution claims (e.g., "API versioning now specified in ADR section 2.3" - agent can check if true).
The sub-agent has `context: fork`, so it runs in isolation and returns cleanly.
The sub-agent returns:
Validate gap count arithmetic:
# Extract counts from sub-agent statistics
resolved_count=<from_statistics>
unresolved_count=<from_statistics>
new_count=<from_statistics>
# Validate arithmetic (original - resolved + new = unresolved)
(cd $(git -C ${CLAUDE_SKILL_DIR} rev-parse --show-toplevel) && uv run python scripts/validate_gap_counts.py \
"$feature_dir" $resolved_count $unresolved_count $new_count)
if [ $? -ne 0 ]; then
echo "⚠️ Gap count mismatch detected. Please review resolve-gaps output manually."
# Ask user via AskUserQuestion: Continue anyway? [yes/no]
# If no: exit 1
fi
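The invariant the validator enforces can be sketched in plain shell. The counts below are illustrative; the real validate_gap_counts.py derives the original count from TestPlanGaps.md itself:

```shell
# Illustrative counts (validate_gap_counts.py reads original_count from the gaps file)
original_count=10
resolved_count=3
new_count=1
unresolved_count=8

# Invariant: original - resolved + new == unresolved
expected=$((original_count - resolved_count + new_count))
if [ "$expected" -eq "$unresolved_count" ]; then
  echo "gap counts consistent"
else
  echo "gap count mismatch: expected $expected, got $unresolved_count"
fi
```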
Update TestPlanGaps.md:
# Get gap count from sub-agent statistics
new_gap_count=<count_of_unresolved_gaps>
# Update status: Open if gaps remain, Resolved if all resolved
new_status=$([ $new_gap_count -eq 0 ] && echo "Resolved" || echo "Open")
(cd $(git -C ${CLAUDE_SKILL_DIR} rev-parse --show-toplevel) && uv run python scripts/frontmatter.py set <feature_dir>/TestPlanGaps.md \
gap_count=$new_gap_count \
status=$new_status)
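Conceptually, frontmatter.py set rewrites key/value pairs in the YAML header. A sed-based sketch of the same effect (sample file and values are illustrative, not the real implementation):

```shell
# Build a sample gaps file with YAML frontmatter (illustrative content)
cat > /tmp/TestPlanGaps-sample.md <<'EOF'
---
gap_count: 5
status: Open
---
# Test Plan Gaps
EOF

# Rewrite the two fields in place, as `frontmatter.py set` would
sed -i.bak -e 's/^gap_count: .*/gap_count: 0/' \
           -e 's/^status: .*/status: Resolved/' /tmp/TestPlanGaps-sample.md

grep '^status:' /tmp/TestPlanGaps-sample.md
```

The real script presumably parses the frontmatter properly rather than pattern-matching lines; the sketch only shows the intended before/after state.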
Invoke test-plan.review skill with the updated feature directory:
/test-plan-review <feature_dir>
This generates a new TestPlanReview.md with updated score and verdict.
Handle review output:
If has_test_cases=true (test cases exist), ask the user via AskUserQuestion:
New information has been incorporated into the test plan.
Changes detected:
Do you want to update the existing test cases?
- Yes, update test cases — regenerate affected test cases and add new ones for new coverage
- No, keep test cases as-is — only TestPlan.md has been updated
- Review changes first — show me the diff before deciding
If user selects option 1: run /test-plan-create-cases <feature_dir> to regenerate test cases.
If user selects option 3: show the diff of the changes, then ask again whether to update the test cases.
Update the README with:
## Changelog
### v1.1.0 (2026-04-23)
- Updated with new API specification
- Resolved 3 gaps from TestPlanGaps.md
- Added 2 new endpoints to Section 4
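Appending such an entry so the newest version stays on top could be sketched as follows (paths and entry text are illustrative, reusing the example above):

```shell
# Sample README with an existing changelog (illustrative content)
cat > /tmp/README-sample.md <<'EOF'
# Feature: mcp_catalog

## Changelog

### v1.0.0 (2026-01-10)
- Initial test plan
EOF

# Insert the new entry directly under the heading so newest-first order is preserved
awk '
  { print }
  /^## Changelog$/ {
    print ""
    print "### v1.1.0 (2026-04-23)"
    print "- Updated with new API specification"
  }
' /tmp/README-sample.md > /tmp/README-new.md
```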
Determine version bump:
- Read the current version from frontmatter (e.g., 1.0.0)
- Apply a minor bump: 1.0.0 → 1.1.0, 1.1.0 → 1.2.0

Update TestPlan.md frontmatter:
(cd $(git -C ${CLAUDE_SKILL_DIR} rev-parse --show-toplevel) && uv run python scripts/frontmatter.py set <feature_dir>/TestPlan.md \
version=$new_version \
additional_docs="<updated_comma_separated_list>")
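The minor-bump rule (1.0.0 → 1.1.0, 1.1.0 → 1.2.0) can be computed with shell parameter expansion; a sketch:

```shell
# Minor version bump: increment the middle component, reset patch to 0
bump_minor() {
  local major minor
  major=${1%%.*}                       # "1.1.0" -> "1"
  minor=${1#*.}; minor=${minor%%.*}    # "1.1.0" -> "1.0" -> "1"
  echo "${major}.$((minor + 1)).0"
}

bump_minor "1.0.0"
bump_minor "1.1.0"
```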
Update status if needed:
Present final summary to user:
Test plan updated successfully
- Location: <feature_dir>
- Version: <old_version> → <new_version>
- Quality score: <old_score>/10 → <new_score>/10
- Verdict:
- Gaps resolved:
- Gaps remaining:
- Test cases: <updated|unchanged>
Updated artifacts:
- TestPlan.md
- TestPlanGaps.md
- TestPlanReview.md
- README.md
- test_cases/ (if regenerated)
Next steps:
- Review changes: git diff (if in git repo)
- Publish updates: /test-plan-publish <feature_name>
Out of scope for this skill:
- Creating new test plans: use /test-plan-create for that
- Resolving review feedback: use /test-plan-resolve-feedback for that
- Publishing: use /test-plan-publish after updating

$ARGUMENTS