From code-quality
Use when user requests user-guided test plans, UAT validation, acceptance criteria definition, or user journey testing. Triggers on: "test plan", "user journey test", "UAT", "acceptance criteria", "manual test plan", "user-guided test", "validate from user perspective", "walk me through testing", "define what to test". Takes an implementation plan file as input and produces a test plan document with user personas, Given/When/Then scenarios, manual UAT steps, traceability matrix, and optional BDD .feature files. Annotates the input plan file so downstream skills (/swarm, /plan-review, /pr-review, /fix, /quality-gate) discover and consume the test plan automatically.
Install: `npx claudepluginhub wgordon17/personal-claude-marketplace --plugin code-quality`

This skill is limited to using the following tools:
Produces a user-facing test plan from an existing implementation plan file. The test plan is written as a human-walkthrough document (personas, Given/When/Then scenarios, manual UAT steps, traceability matrix) and annotated back into the plan file so all downstream skills discover it automatically.
Standalone mode is not supported. This skill requires an implementation plan file as input. The canonical workflow is:
/incremental-planning → produces plan file (hack/plans/{run-id}-<feature>.md)
↓
/test-plan <plan-file-path> → enriches plan file + writes test plan doc
↓
/swarm (reads plan file as always) → discovers test plan via ## Test Plan annotation
/test-plan hack/plans/{run-id}-<feature>.md
The plan file path is required. If not provided, ask via AskUserQuestion:
"Which plan file should I generate a test plan for? Provide the path to a plan file
(e.g., hack/plans/feat-auth-1711388400-session-auth.md)."
Read and parse the input plan file before doing anything else.
Validate plan file path — Before reading the plan file, normalize the provided path
(resolve .. segments, collapse ./ sequences). Verify the normalized path falls within
the current working directory. If the path escapes the CWD boundary, stop with an error:
"Plan file path is outside the project boundary. Provide a path within the current
working directory."
Read the plan file — Extract:
- **Branch:** field — for downstream cross-session discovery
- **Goal:** field — 1-sentence feature description
- **Tech Stack:** field — language/framework context
- `## Test Plan` section (if present — stop and report if already annotated):
  "This plan file already has a test plan. Run /test-plan on a fresh plan or delete
  the ## Test Plan section from {plan_file} to regenerate."

Read project memory — Detect {memory_dir} per
code-quality/references/project-memory-reference.md (Directory Detection and Worktree
Resolution sections). Then read:
- {memory_dir}/PROJECT.md — architectural decisions, domain context
- {memory_dir}/LESSONS.md — past lessons (if exists). Silently incorporate.

Generate {run-id} — Generate the run-id early so it is available for the staging file
in Phase 2. Follow the Run-ID Naming Convention in
code-quality/references/project-memory-reference.md: {branch-slug}-{unix-timestamp}
(e.g., feat-auth-1711388400).
Identify user-facing tasks — Not every plan task represents a user-visible behavior. Scan task titles and steps for user-facing surface area:
Detect test runner and BDD infrastructure — Search project dependency files for both the project's test runner and any existing BDD tooling. Both are needed: the test runner informs Phase 3's BDD framework recommendations; the BDD detection determines whether Phase 3 auto-selects or asks the user.
Test runner detection:
| File | Pattern | Test Runner |
|---|---|---|
| package.json | vitest | Vitest |
| package.json | jest or @jest/ | Jest |
| package.json | mocha | Mocha |
| pyproject.toml / requirements*.txt | pytest | pytest |
| go.mod | file exists (Glob) | go test |
| Cargo.toml | file exists (Glob) | cargo test |
| build.gradle / pom.xml | junit or testng | JUnit / TestNG |
| Gemfile | rspec | RSpec |
| Gemfile | minitest | Minitest |
For rows with a grep pattern: use Grep with output_mode: "files_with_matches".
For rows marked "file exists (Glob)": use Glob to check whether the file exists in the
project root (e.g., Glob("go.mod")). Record test_runner (or unknown). If multiple
match, resolve by this priority ladder:
"test": "vitest run") → that runner wins,
even if the scripts value uses a prefix or flags (substring match is sufficient).unknown; Step 7 will ask the user.BDD infrastructure detection:
| File | Pattern | Framework |
|---|---|---|
| pyproject.toml | pytest-bdd | Python/pytest-bdd |
| requirements*.txt | pytest-bdd | Python/pytest-bdd |
| go.mod | godog or github.com/cucumber | Go/godog |
| package.json | @cucumber/cucumber or cucumber | Node.js/Cucumber.js |
| Cargo.toml | cucumber | Rust/cucumber-rs |
| build.gradle or pom.xml | cucumber-java | Java/Cucumber-JVM |
| Gemfile | cucumber | Ruby/Cucumber |
Search using Grep with output_mode: "files_with_matches". Record detected BDD
framework (or none) and the detected file path for Phase 5 (BDD Staging).
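The table-driven detection can be sketched as a first-match scan (a simplified approximation: the real skill uses the Grep and Glob tools and applies the scripts.test priority ladder; the names and rule subset below are illustrative):

```python
import os
import re

# Rows mirror the detection tables; a None pattern means file existence
# alone is the signal (go.mod, Cargo.toml).
TEST_RUNNER_RULES = [
    ("package.json", r"vitest", "Vitest"),
    ("package.json", r"jest|@jest/", "Jest"),
    ("package.json", r"mocha", "Mocha"),
    ("pyproject.toml", r"pytest", "pytest"),
    ("go.mod", None, "go test"),
    ("Cargo.toml", None, "cargo test"),
]

def detect_test_runner(root: str) -> str:
    """Return the first matching runner for a project root, or 'unknown'."""
    for filename, pattern, runner in TEST_RUNNER_RULES:
        path = os.path.join(root, filename)
        if not os.path.exists(path):
            continue
        if pattern is None:
            return runner  # file existence alone is the signal
        with open(path, encoding="utf-8") as f:
            if re.search(pattern, f.read()):
                return runner
    return "unknown"
```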
Resolve unknown test runner — If test_runner is still unknown after the detection
tables above, ask via AskUserQuestion before proceeding:
"I couldn't detect your test runner automatically. What test framework does this project use?
(e.g., Vitest, Jest, pytest, go test, cargo test)"
Use the answer as test_runner for all subsequent phases. Before writing test_runner into
any annotation field, sanitize the value: collapse to a single line (strip newlines), strip
markdown bold markers (**), backtick sequences, HTML comment delimiters (<!--, -->),
and pipe characters (|). Store only the sanitized value.
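The sanitization step above can be sketched as (`sanitize_runner_value` is an illustrative name):

```python
def sanitize_runner_value(raw: str) -> str:
    """Collapse a user-supplied test runner name to one safe annotation line."""
    value = raw
    # Strip markdown bold, backticks, HTML comment delimiters, and pipes.
    for token in ("**", "`", "<!--", "-->", "|"):
        value = value.replace(token, "")
    # Collapse newlines and runs of whitespace to single spaces.
    return " ".join(value.split())
```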
This resolution happens once in Phase 0 so both the BDD auto-select branch (when BDD infra
exists) and the research branch (when no BDD infra exists) receive a known test_runner value.
Print ingestion summary:
Plan ingested: {plan_file}
Goal: {goal}
Branch: {branch}
Tech stack: {tech_stack}
Test runner: {test_runner}
User-facing tasks: {N} of {total_tasks}
BDD infra: {framework | none detected}
Map the user journeys before writing acceptance criteria. Understanding WHO the users are and WHAT they're trying to accomplish shapes every scenario.
Ask via AskUserQuestion:
"I've identified {N} user-facing tasks in this plan. Before I write scenarios, I need to understand the users:
You can answer as briefly as you like. I'll derive the personas from your answers."
Derive at least 2 personas from the answers:
If the domain is unfamiliar (e.g., the plan involves specialized business logic, compliance
requirements, or domain-specific workflows not obvious from the tech stack), invoke
/deep-research in Bridged mode before defining personas:
Skill("deep-research", "Research {domain} user journeys and acceptance criteria
patterns to inform user personas for {goal}. Mode: Bridged")
Feed research findings into persona definitions.
For each persona, construct the user journey:
Entry Point → Action 1 → Action 2 → ... → Expected Outcome
Identify:
Map each journey step to the plan tasks identified in Phase 0. This forms the traceability backbone.
Present the journey map in chat (not to a file yet):
"I've mapped the user journeys. Here's what I see:
Primary persona: [Name] — [description]
Journey: [Entry] → [A1] → [A2] → [Outcome]
Happy path covers Tasks 1, 3. Error paths touch Task 2.

Edge-case persona: [Name] — [description]
Journey: [Entry] → [A1 variant] → [Outcome variant]
Additional coverage for Task 2 edge case.
Proceeding to write acceptance criteria."
Write Given/When/Then acceptance criteria for each user-facing behavior identified in Phase 0.
Tag each scenario with its source plan task using {plan-task: Task N}. Assign IDs sequentially: S1, S2, S3, etc. IDs are stable — do not renumber.
### S{N}: {Title} {plan-task: Task N}
**Persona:** {primary | edge-case}
Given {initial state or precondition}
When {user performs this action}
Then {expected outcome observable to the user}
[And {additional condition} (optional, max 1)]
Write scenarios to a working list in memory (not to a file yet). Scenarios will be written to the test plan document in Phase 4.
After drafting all scenarios, present them in chat as a numbered list with titles only.
Then ask via AskUserQuestion:
"I've drafted {N} scenarios across {M} plan tasks. Here's the list:
- S1: {title} (Task N) — primary
- S2: {title} (Task N) — primary
- S3: {title} (Task N) — edge-case
- ...
Questions:
(Answer 'looks good' to proceed, or describe adjustments.)"
Incorporate feedback before moving to Phase 3.
After incorporating user feedback, batch the finalized scenarios and spawn parallel Sonnet agents to review them for testability and completeness:
Batch scenarios into groups of 5 and spawn one Agent (sonnet model) per batch (max 8 agents).
Each reviewer receives the full scenario list for cross-reference but reviews only its assigned
batch. Output per reviewer: for each scenario, testable: yes|no,
completeness: [missing conditions], specificity: [vague terms]. Present scenarios flagged
as not testable or incomplete to the user for revision.
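The batching rule can be sketched as follows (how overflow beyond 8 batches is handled is not specified above; this sketch assumes the batch size grows so the agent cap holds, and `batch_scenarios` is an illustrative name):

```python
def batch_scenarios(ids: list, batch_size: int = 5, max_agents: int = 8) -> list:
    """Group scenario IDs into review batches, one Sonnet agent per batch."""
    batches = [ids[i:i + batch_size] for i in range(0, len(ids), batch_size)]
    if len(batches) > max_agents:
        # Too many batches for the agent cap: grow the batch size so at most
        # max_agents agents spawn (ceiling division).
        per = -(-len(ids) // max_agents)
        batches = [ids[i:i + per] for i in range(0, len(ids), per)]
    return batches
```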
Write the finalized scenario list to a staging file before proceeding to Phase 3. Use the
memory directory detected in Phase 0: {memory_dir}/test-plans/{run-id}-scenarios-draft.md
(fallback: ~/.claude/test-plans/{run-id}-scenarios-draft.md). The {run-id} was generated
in Phase 0 Step 4. This protects against context recycling loss during the multi-step Phase 3-4
interval and ensures concurrent /test-plan runs do not collide. Phase 4 reads from this file
instead of memory. After Phase 4 writes the full test plan document, delete the staging file.
Determine whether to generate BDD .feature files in addition to the UAT document.
If BDD infra detected in Phase 0:
Automatically select UAT + BDD mode. No question needed — the project already uses BDD,
so .feature files are expected. Print:
BDD infra detected ({framework} in {file}). Generating UAT document + .feature files.
Record mode based on the detected framework's relationship to the test runner: if the
detected BDD framework is a plugin for test_runner (e.g., pytest-bdd for pytest,
jest-cucumber for Jest), set mode to UAT + BDD (native integration). If it is a
standalone framework (e.g., godog, Cucumber.js, cucumber-rs), set mode to
UAT + BDD (standalone).
Set BDD Setup Needed: no — the framework is already installed. Scan the project for
existing .feature files and step definitions to determine bdd_feature_dir and bdd_step_dir.
If none found, use the framework's conventional defaults: Python/pytest-bdd → tests/features/,
tests/step_defs/; Go/godog → features/, features/; Node.js/Cucumber.js → features/,
features/step_definitions/; Rust/cucumber-rs → tests/features/, tests/; Java/Cucumber-JVM
→ src/test/resources/, src/test/java/; Ruby/Cucumber → features/, features/step_definitions/.
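The conventional defaults above can be captured as a lookup table (a sketch; the mapping mirrors the list in this section and `default_dirs` is an illustrative name):

```python
# Conventional default directories per detected BDD framework, used only when
# no existing .feature files or step definitions are found in the project.
BDD_DEFAULT_DIRS = {
    "Python/pytest-bdd": ("tests/features/", "tests/step_defs/"),
    "Go/godog": ("features/", "features/"),
    "Node.js/Cucumber.js": ("features/", "features/step_definitions/"),
    "Rust/cucumber-rs": ("tests/features/", "tests/"),
    "Java/Cucumber-JVM": ("src/test/resources/", "src/test/java/"),
    "Ruby/Cucumber": ("features/", "features/step_definitions/"),
}

def default_dirs(framework: str) -> tuple[str, str]:
    """Return (bdd_feature_dir, bdd_step_dir) for a detected framework."""
    return BDD_DEFAULT_DIRS[framework]
```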
If NO BDD infra detected:
Before presenting options, research BDD integration options that are compatible with the project's actual test runner. This prevents recommending frameworks that conflict with the existing test setup (e.g., suggesting Cucumber.js for a Vitest project).
(test_runner is always known at this point — unknown was resolved in Phase 0 Step 7.)
Skill("deep-research", "BDD and Gherkin integration options for {test_runner}
in {tech_stack} projects. Compare: (1) native BDD plugins that integrate directly with
{test_runner} (e.g., vitest-cucumber-plugin for Vitest, jest-cucumber for Jest, pytest-bdd
for pytest), (2) standalone BDD frameworks (Cucumber.js, godog, cucumber-rs) and their
compatibility with {test_runner}, (3) using Gherkin .feature files as specification-only
documentation (not executable). For each viable option: package name, registry page,
last release date, open issues count, compatibility notes with {test_runner}, setup
complexity. Exclude options unmaintained (no release in 12+ months) or incompatible
with {test_runner}. Mode: External")
Fallback: If /deep-research is unavailable, errors, or returns no viable options,
present options A and B only (no C/D), with this note:
"BDD framework research was inconclusive — viable framework and install commands could not be
determined for {test_runner}. Options C/D require confirmed package compatibility and have been
omitted. Choose A (no BDD) or B (specification-only .feature files). If you know which BDD
framework you want, set it up manually after the plan is complete."
Do NOT fall back to the Phase 5 BDD Toolchain Reference table — that table applies only to
projects that already have BDD installed; it does not provide install commands suitable for
fresh BDD setup on arbitrary test runners.
If research succeeds: Use the research findings to build a project-aware options list.
Present via AskUserQuestion:
"No BDD framework detected. Your project uses {test_runner} for testing.
Based on research, here are the BDD options compatible with {test_runner}:
A) Manual UAT checklist only (no BDD) — Human-readable walkthrough with pass/fail checkboxes. No BDD setup required.
B) Gherkin specification files only (documentation, not executable) — Generates .feature files as living specifications. Tests remain in {test_runner}. — No additional framework needed.
{If research found a native {test_runner} BDD plugin:} C) {test_runner}-native BDD ({plugin_name}) — {1-sentence from research: what it does, maintenance status} — Tests run through {test_runner}. No second test runner.
{If research found a compatible standalone BDD framework:} D) Standalone BDD framework ({standalone_name}) — Adds a separate test runner alongside {test_runner}. — {1-sentence from research: tradeoffs}
{Omit C if no viable native plugin found. Omit D if no compatible standalone found. If neither C nor D is viable, note this and explain why.}
Recommendation: {research-informed — prefer native integration (C) over standalone (D) over specification-only (B). Recommend A if the plan is simple and BDD adds no value.}
Which option?"
Record the selected mode as mode:
- Manual UAT
- UAT + BDD (feature files only) — set bdd_framework: none (specification only),
  BDD Setup Needed: no. Omit install/scaffold/test commands from the annotation.
- UAT + BDD (native integration) — use the researched plugin's install/scaffold/test
  commands. Record bdd_framework as the plugin name.
- UAT + BDD (standalone) — use the standalone framework's commands.

Store the research-derived bdd_install_cmd, bdd_scaffold_cmd, bdd_test_cmd,
bdd_framework, bdd_feature_dir, and bdd_step_dir from the selected option.
For option B, these are all empty/none — Phase 6 handles this format.
Ask as a follow-up (or combine with the above if Manual UAT):
"Should I include exploratory test charters for any high-uncertainty areas? Exploratory charters are brief mission statements for manual testing sessions where the behavior isn't fully specified: 'Explore [target] with [resources] to discover [information]'.
Answer 'yes' if any plan tasks involve:
(yes/no or describe the areas)"
Record include_charters: true/false and any specified areas.
Write the test plan document. This is the primary artifact — a human-readable document that a user can print out, follow step by step, and mark pass/fail.
Use the two-stage fallback:
1. If {memory_dir} is confirmed (detected in Phase 0): write to
   {memory_dir}/test-plans/{run-id}.md
   (create the test-plans/ subdirectory if it doesn't exist)
2. Otherwise: ~/.claude/test-plans/{run-id}.md
   (create the directory if it doesn't exist)

Use the {run-id} generated in Phase 0 Step 4.
Do NOT create a hack/ directory if one doesn't exist. Only write to confirmed existing
memory directories or the ~/.claude/ fallback.
Fallback limitation: When the ~/.claude/ fallback is used, downstream skills validate
test plan paths against their own {memory_dir}/test-plans/. If the downstream skill's
{memory_dir} differs from ~/.claude/, path validation rejects the test plan (empty string
fallback). The test plan document remains useful standalone for manual UAT reference.
Print: "Writing test plan to: {output_path}"
Write the complete test plan document using this exact format:
# Test Plan: {Goal from plan header}
**Source Plan:** {plan_file_path}
**Branch:** {branch from plan header}
**Date:** {YYYY-MM-DD}
**Mode:** {mode}
---
## User Personas
- **Primary — {Name}:** {description, goals, context, technical level}
- **Edge Case — {Name}:** {description, goals, context, what makes them different}
---
## User Journey Map
### {Primary Persona Name}
{Entry Point} → {Action 1} → {Action 2} → {Expected Outcome}
Happy path: {1-sentence description of the successful flow}
Error paths: {list the failure conditions}
### {Edge Case Persona Name}
{Entry Point variant} → {Action 1 variant} → {Outcome variant}
---
## Scenarios
### S{N}: {Title} {plan-task: Task N}
**Persona:** {primary | edge-case}
**Preconditions:** {setup required before the test can be executed}
```gherkin
Given {initial state}
When {user action}
Then {expected outcome}
```

Manual Steps: {numbered steps a human follows to execute the scenario}

Pass criteria: {one sentence — what "pass" looks like}
Fail indicators: {one sentence — what "fail" looks like}
[repeat for each scenario]
---
## Traceability Matrix

| Plan Task | Scenario(s) | .feature File | Status |
|---|---|---|---|
| Task 1: {title} | S1, S2 | {feature_file \| N/A} | {status} |
| Task 2: {title} | S3 | {feature_file \| N/A} | {status} |
| Task N: {title} (internal) | — | — | Not covered (internal only) |
---
## Exploratory Charters

[Include only if include_charters == true. Omit the section entirely if not.]
Explore {target system or feature area} with {available resources: dev tools, test data, user accounts} to discover {what you want to learn: edge cases, failure modes, performance limits}.
Duration: 30-60 minutes
Persona: {which persona to use}
Record: {what to log during the session}
[repeat for each charter]
Write the document to `{output_path}` using the Write tool.
---
## Phase 5 — BDD Staging (Conditional)
**Skip this phase entirely if mode is `Manual UAT`.**
Run when mode is any BDD variant: `UAT + BDD (feature files only)`,
`UAT + BDD (native integration)`, or `UAT + BDD (standalone)`.
### Feature File Generation
For each user-facing plan task, generate one `.feature` file containing all scenarios
for that task. Feature files use standard Gherkin format:
```gherkin
Feature: {task title}
Background:
{shared preconditions across scenarios, if any}
Scenario: {S1 title}
Given {initial state}
When {user action}
Then {expected outcome}
Scenario: {S2 title}
Given {initial state}
When {user action}
Then {expected outcome}
```

Feature file naming: `{task-title-kebab-case}.feature`
Example: user-login-flow.feature, password-reset.feature
Output directory: {memory_dir}/test-plans/{run-id}-features/
(or ~/.claude/test-plans/{run-id}-features/ for the fallback)
Write each .feature file using the Write tool.
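The kebab-case naming rule can be sketched as (`feature_filename` is an illustrative name):

```python
import re

def feature_filename(task_title: str) -> str:
    """Kebab-case a plan task title into its .feature file name."""
    slug = re.sub(r"[^a-z0-9]+", "-", task_title.lower()).strip("-")
    return f"{slug}.feature"
```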
### BDD Toolchain Reference

If Phase 3 ran /deep-research (no existing BDD infra): use the research-derived values stored in Phase 3. The research already identified the correct framework, install commands, and compatibility with the project's test runner. Skip this section.
If Phase 3 auto-selected BDD (existing BDD infra detected in Phase 0): derive values from the detected framework using this fallback table:
| Language | Framework | Install Command | Scaffold Command | Test Command |
|---|---|---|---|---|
| Python | pytest-bdd | uv add --dev pytest-bdd>=7.0 | pytest-bdd generate {feature} or pytest --generate-missing | uv run pytest |
| Go | godog + gherkingen | go get github.com/cucumber/godog@v0.15.0 | gherkingen {feature.file} | go test ./features/... |
| Node.js | Cucumber.js | npm install --save-dev @cucumber/cucumber@^11.0 | npx cucumber-js (auto-generates snippets) | npx cucumber-js |
| Rust | cucumber-rs | cargo add --dev cucumber | Run tests with .fail_on_skipped() — reports unmatched steps | cargo test |
| Java | Cucumber-JVM | <dependency>io.cucumber:cucumber-java</dependency> | Run features — auto-generates snippets | mvn test |
| Ruby | Cucumber | gem install cucumber | cucumber --dry-run — generates undefined step snippets | bundle exec cucumber |
This table is a fallback for projects that already have BDD installed. For new BDD setups, Phase 3's /deep-research provides test-runner-aware recommendations that supersede this table.
Record:
- bdd_install_cmd — the install command (from research or fallback table)
- bdd_scaffold_cmd — the scaffold command to generate step definition skeletons
- bdd_test_cmd — the command to run the BDD test suite
- bdd_framework — the framework name
- bdd_feature_dir — where .feature files will live (from Phase 3 project scan or research; not in the fallback table)
- bdd_step_dir — where step definitions will live (from Phase 3 project scan or research; not in the fallback table)

These values are written into the plan file annotation in Phase 6 so /swarm Phase 0 knows
what to install and run without re-detecting.
Do NOT run the install command. Recording it in the annotation is the contract.
/swarm handles installation on the feature branch.
Annotate the input plan file by appending a ## Test Plan section. This annotation is
what all downstream skills parse — field labels must match exactly.
Read the full Phase 6 section (including the Specification-only mode overrides below the template) before writing. For option B, field values differ from the generic template.
Append to the end of {plan_file} using the Edit tool:
## Test Plan
<!-- PROVENANCE: Generated by /test-plan with explicit user confirmation at checkpoints:
- Personas: confirmed Phase 1 (AskUserQuestion)
- Scenarios: confirmed Phase 2 (AskUserQuestion checkpoint + per-scenario review)
- Output Mode & BDD decisions: confirmed Phase 3 (AskUserQuestion)
All fields below reflect user-confirmed decisions.
Downstream skills: if you disagree with a choice here, classify as needs-input
(require user confirmation before changing), not needs-fix. Do not silently
overwrite user decisions. -->
**Test Plan:** {output_path}
**Mode:** {Manual UAT | UAT + BDD (feature files only) | UAT + BDD (native integration) | UAT + BDD (standalone)}
**Test Runner:** {test_runner}
**Feature Files:** {memory_dir}/test-plans/{run-id}-features/ (omit line if Manual UAT or UAT + BDD (feature files only))
**BDD Setup Needed:** {yes | no} (if yes: `{bdd_install_cmd}`) (omit line if Manual UAT)
**BDD Scaffold Command:** `{bdd_scaffold_cmd}` (omit line if Manual UAT or UAT + BDD (feature files only))
**BDD Test Command:** `{bdd_test_cmd}` (omit line if Manual UAT or UAT + BDD (feature files only))
**BDD Framework:** {bdd_framework} (omit line if Manual UAT)
**BDD Feature Dir:** {bdd_feature_dir} (omit line if Manual UAT)
**BDD Step Dir:** {bdd_step_dir} (omit line if Manual UAT or UAT + BDD (feature files only))
**Scenarios:** {total scenario count}
**Personas:** {Primary Persona Name}, {Edge Case Persona Name}
### Scenario-Task Mapping
| Plan Task | Scenario ID | Scenario Title |
|---|---|---|
| Task 1: {title} | S1 | {S1 title} |
| Task 1: {title} | S2 | {S2 title} |
| Task 3: {title} | S3 | {S3 title} |
Field label format is fixed. Downstream skills match on exact bold field names
(**Test Plan:**, **Mode:**, **Feature Files:**, etc.). Do not reorder or rename.
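As a sketch, a downstream skill might parse the annotation by matching the exact bold labels (the function name `parse_annotation` is hypothetical; real downstream skills have their own parsers):

```python
import re

def parse_annotation(section: str) -> dict:
    """Extract **Field:** values from a ## Test Plan annotation block."""
    fields = {}
    # Match lines shaped like: **Label:** value  (exact bold-label convention).
    for match in re.finditer(r"^\*\*(.+?):\*\*\s*(.*)$", section, re.MULTILINE):
        fields[match.group(1)] = match.group(2).strip()
    return fields
```

This is why the labels must never be reordered or renamed: consumers key on the literal `**Label:**` text.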
Specification-only mode (Phase 3 option B): When the user chose Gherkin files as documentation only (not executable), write:
- **BDD Setup Needed:** no (feature files are Gherkin specifications; tests implemented in {test_runner})
- **BDD Framework:** none (specification only)
- **BDD Feature Dir:** {bdd_feature_dir} (files still exist for documentation)
- Omit entirely: **Feature Files:**, **BDD Scaffold Command:**, **BDD Test Command:**, **BDD Step Dir:**

Note: **Feature Files:** is intentionally omitted for specification-only mode. Swarm Phase 3.5
(BDD-Step-Writer) is triggered by the presence of **Feature Files:**; spec-only has no executable
BDD infrastructure and must not trigger step generation. The .feature file location is still
recorded in **BDD Feature Dir:** for documentation reference.
The **BDD Setup Needed:** yes signal is what /swarm Phase 0 uses to decide whether to
install the BDD framework on the feature branch.
After Phase 6, print the completion report:
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
TEST PLAN COMPLETE
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Plan file annotated: {plan_file}
Test plan document: {output_path}
Mode: {mode}
Scenarios: {N} ({primary_count} primary, {edge_case_count} edge case)
Tasks covered: {M} of {total_user_facing_tasks} user-facing tasks
{if BDD (native integration or standalone):}
Feature files: {feature_dir} ({file_count} files)
BDD setup needed: {yes | no} ({framework}{if yes: — /swarm will handle installation}{if no: — already installed})
{end if}
{if BDD (feature files only — specification-only):}
Feature files: {feature_dir} ({file_count} files, specification only)
BDD setup needed: no (Gherkin specifications — tests in {test_runner})
{end if}
{if exploratory charters:}
Exploratory charters: {charter_count}
{end if}
Next: Run /swarm to implement the plan. /swarm will automatically discover
this test plan from the plan file annotation.
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
### Phase Flow
Phase 0: Ingest plan file (detect test runner + BDD infra) →
Phase 1: Map user journeys →
Phase 2: Write scenarios + user checkpoint →
Phase 3: Select output mode (/deep-research for BDD compatibility) →
Phase 4: Write test plan document →
Phase 5: Generate .feature files (BDD only) →
Phase 6: Annotate plan file (with provenance markers) → Completion report
### Output Paths (two-stage fallback)
1. {memory_dir}/test-plans/{run-id}.md — test plan document
{memory_dir}/test-plans/{run-id}-features/ — .feature files (BDD only)
2. ~/.claude/test-plans/{run-id}.md — fallback
~/.claude/test-plans/{run-id}-features/ — fallback (BDD only)
### Plan File Annotation Fields (exact labels — never rename)
**Test Plan:** | **Mode:** | **Test Runner:** | **Feature Files:** | **BDD Setup Needed:**
**BDD Scaffold Command:** | **BDD Test Command:** | **BDD Framework:** | **BDD Feature Dir:** | **BDD Step Dir:**
**Scenarios:** | **Personas:** | ### Scenario-Task Mapping
### BDD Toolchain
Research-driven (Phase 3 /deep-research) when no existing BDD infra.
Fallback table (Phase 5) when BDD already installed:
Python: pytest-bdd | Go: godog+gherkingen | Node.js: Cucumber.js
Rust: cucumber-rs | Java: Cucumber-JVM | Ruby: Cucumber
### Standalone Mode
Not supported. A plan file path is required.
Run /incremental-planning first to produce the plan file.