From the `test-plan` plugin.
Score an existing test plan using the quality rubric without triggering auto-revision. Use it for standalone quality assessment, or to evaluate test plans created outside the automated generation pipeline.
Install via:

```shell
npx claudepluginhub opendatahub-io/skills-registry --plugin test-plan
```

This skill uses the workspace's default tool permissions.
Score an existing test plan using the 5-criteria quality rubric (Specificity, Grounding, Scope Fidelity, Actionability, Consistency). This is the user-facing entrypoint for rubric evaluation.
Usage:

```
/test-plan-score <feature_dir>
```

Examples:

```
/test-plan-score kagenti_agent_templates
/test-plan-score mcp_catalog
```

Parse `$ARGUMENTS` to extract:

- `<feature_dir>`: the directory containing `TestPlan.md`

Install the test-plan package (makes all scripts importable):
```shell
(cd $(git -C ${CLAUDE_SKILL_DIR} rev-parse --show-toplevel) && uv sync --extra dev)
```
If installation fails, inform the user and do NOT proceed. Once installed, all Python scripts will work from any directory.
Read the frontmatter of `<feature_dir>/TestPlan.md` to extract `source_key`:
```shell
(cd $(git -C ${CLAUDE_SKILL_DIR} rev-parse --show-toplevel) && uv run python scripts/frontmatter.py read <feature_dir>/TestPlan.md)
```
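The internals of `scripts/frontmatter.py` are not shown in this document; as a rough illustration only, a minimal frontmatter `read` might look like the sketch below (assuming YAML-style frontmatter delimited by `---` lines; the real script may differ):

```python
# Sketch only: extract one key from YAML-style frontmatter.
# Assumes the file opens with a block delimited by "---" lines;
# the real scripts/frontmatter.py may behave differently.

def read_frontmatter_key(text: str, key: str):
    lines = text.splitlines()
    if not lines or lines[0].strip() != "---":
        return None  # no frontmatter block
    for line in lines[1:]:
        if line.strip() == "---":
            break  # end of frontmatter
        k, sep, v = line.partition(":")
        if sep and k.strip() == key:
            return v.strip()
    return None

plan = "---\nsource_key: PROJ-123\n---\n# Test Plan\n"
print(read_frontmatter_key(plan, "source_key"))  # PROJ-123
```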
Fetch the strategy description via MCP using `source_key`:

`mcp__atlassian__getJiraIssue` with `issueIdOrKey=<source_key>`
If MCP is unavailable, check for a local strategy file at `artifacts/strat-tasks/<source_key>.md`. If neither is available, warn the user and proceed; grounding and scope fidelity will then be scored on plan consistency only.

Read the score agent prompt from `skills/test-plan-review/prompts/score-agent.md`.
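The MCP-then-local-file fallback described above can be sketched as a small helper (an illustration only; in practice the MCP call happens via tools, so it is modeled here as an optional callable):

```python
# Sketch of the strategy-resolution fallback; names are hypothetical.
from pathlib import Path
from typing import Callable, Optional

def resolve_strategy_text(
    source_key: str,
    fetch_via_mcp: Optional[Callable[[str], str]] = None,
    artifacts_dir: str = "artifacts/strat-tasks",
) -> Optional[str]:
    """Try MCP first, then a local file; None means consistency-only scoring."""
    if fetch_via_mcp is not None:
        try:
            return fetch_via_mcp(source_key)
        except Exception:
            pass  # MCP unavailable; fall through to the local file
    local = Path(artifacts_dir) / f"{source_key}.md"
    if local.exists():
        return local.read_text()
    return None  # caller warns the user and scores on plan consistency only
```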
Launch a forked score agent with these substitutions:

- `{FEATURE_DIR}` = feature directory path
- `{TEST_PLAN_PATH}` = `<feature_dir>/TestPlan.md`
- `{STRATEGY_TEXT}` = raw strategy description text from Step 1
- `{CALIBRATION_DIR}` = `skills/test-plan-review/calibration/`

Parse the score agent's output and present the results directly to the user:
## Test Plan Score — {feature_name}
### Rubric Scores
| Criterion | Score | Notes |
|-----------|-------|-------|
| Specificity | {n}/2 | {brief rationale} |
| Grounding | {n}/2 | {brief rationale} |
| Scope Fidelity | {n}/2 | {brief rationale} |
| Actionability | {n}/2 | {brief rationale} |
| Consistency | {n}/2 | {brief rationale} |
**Total: {sum}/10**
### Verdict
{If >= 8, no zeros: "**Ready** — proceed to test case generation"}
{If = 7, no zeros: "**Revise** — minor improvements needed. Re-run via `/test-plan-create` flow to apply auto-revision, or invoke the internal `test-plan.review` workflow from automation."}
{If < 7 or any zero: "**Rework** — significant issues. Re-run via `/test-plan-create` flow for remediation, or use automation that calls internal `test-plan.review`."}
### Grounding Cross-Reference
{Include the full grounding cross-reference table from the scorer}
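The `{NAME}` substitutions used when launching the score agent can be sketched as simple string replacement (an illustration, not the actual templating mechanism, which this document does not specify):

```python
# Sketch only: fill {NAME} placeholders in the score-agent prompt.
# Assumes placeholders appear literally in the prompt text.

def fill_prompt(template: str, values: dict) -> str:
    for name, value in values.items():
        template = template.replace("{" + name + "}", value)
    return template

prompt = fill_prompt(
    "Score the plan at {TEST_PLAN_PATH} against {CALIBRATION_DIR}.",
    {
        "TEST_PLAN_PATH": "my_feature/TestPlan.md",
        "CALIBRATION_DIR": "skills/test-plan-review/calibration/",
    },
)
print(prompt)
```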
Related: the `/test-plan-create` flow (which calls the internal `test-plan.review` workflow).
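The verdict thresholds in the template above reduce to a small decision function; a sketch:

```python
# Sketch of the verdict thresholds: total >= 8 with no zero-scored
# criterion is Ready; exactly 7 with no zeros is Revise; otherwise Rework.

def verdict(scores: dict) -> str:
    total = sum(scores.values())
    has_zero = any(v == 0 for v in scores.values())
    if total >= 8 and not has_zero:
        return "Ready"
    if total == 7 and not has_zero:
        return "Revise"
    return "Rework"

print(verdict({"Specificity": 2, "Grounding": 2, "Scope Fidelity": 2,
               "Actionability": 1, "Consistency": 1}))  # Ready
```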