npx claudepluginhub parhumm/jaan-to --plugin jaan-to

This skill is limited to using the following tools:
> Produce runnable Vitest unit tests and Playwright E2E specs from BDD test cases and scaffold code.
- $JAAN_CONTEXT_DIR/tech.md - Tech stack context (CRITICAL -- determines test frameworks, runners, patterns): #current-stack, #frameworks, #constraints, #patterns
- $JAAN_CONTEXT_DIR/config.md - Project configuration
- $JAAN_TEMPLATES_DIR/jaan-to-qa-test-generate.template.md - Output template
- $JAAN_LEARN_DIR/jaan-to-qa-test-generate.learn.md - Past lessons (loaded in Pre-Execution)
- $JAAN_OUTPUTS_DIR/research/71-qa-bdd-gherkin-test-code-generation.md - Research: playwright-bdd, jest-cucumber, tag routing, test data factories, MSW, Vitest workspaces, CI execution
- ${CLAUDE_PLUGIN_ROOT}/docs/extending/language-protocol.md - Language resolution protocol

Upstream Artifacts: $ARGUMENTS
Accepts 1-3 file paths or descriptions:
- BDD test cases (from /jaan-to:qa-test-cases)
- Scaffold code (from /jaan-to:backend-scaffold or /jaan-to:frontend-scaffold)
- API contract (from /jaan-to:backend-api-contract) for MSW handler generation and API assertion data
- Mutation survivors (from /jaan-to:qa-test-mutate); the skill then generates targeted tests to kill surviving mutants

IMPORTANT: The input above is your starting point. Determine mode and proceed accordingly.
MANDATORY — Read and execute ALL steps in: ${CLAUDE_PLUGIN_ROOT}/docs/extending/pre-execution-protocol.md
Skill name: qa-test-generate
Execute: Step 0 (Init Guard) → A (Load Lessons) → B (Resolve Template) → C (Offer Template Seeding)
Also read the comprehensive research document:
$JAAN_OUTPUTS_DIR/research/71-qa-bdd-gherkin-test-code-generation.md
This provides:
Also read context files if available:
$JAAN_CONTEXT_DIR/tech.md -- Know the tech stack for framework-specific test generation

If files do not exist, continue without them.
Read and apply language protocol: ${CLAUDE_PLUGIN_ROOT}/docs/extending/language-protocol.md
Override field for this skill: language_qa-test-generate
Language exception: Generated code output (variable names, code blocks, test files, config files) is NOT affected by this setting and remains in the project's programming language.
ultrathink
Use extended reasoning for:
Check if $ARGUMENTS contains --from-mutants:
Read the survivors JSON file from the provided path. Validate against handoff contract:
{
"schema_version": "1.0",
"tool": "{framework}",
"mutation_score": {number or null},
"survivors": [
{
"file": "src/services/auth.ts",
"line": 42,
"original": "return balance > 0;",
"mutated": "return balance >= 0;",
"mutator": "ConditionalBoundary"
}
]
}
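The validation step can be sketched in a few lines of TypeScript. Only the field names and the `schema_version` value come from the contract above; the interface names, function name, and error messages are illustrative, not the plugin's actual loader:

```typescript
// Minimal validator for the qa-test-mutate survivors handoff contract.
interface Survivor {
  file: string;
  line: number;
  original: string;
  mutated: string;
  mutator: string;
}

interface SurvivorsFile {
  schema_version: string;
  tool: string;
  mutation_score: number | null;
  survivors: Survivor[];
}

function parseSurvivors(raw: string): SurvivorsFile {
  const data = JSON.parse(raw);
  if (data.schema_version !== "1.0") {
    throw new Error(`Unsupported schema_version: ${data.schema_version}`);
  }
  if (!Array.isArray(data.survivors)) {
    throw new Error("survivors must be an array");
  }
  for (const s of data.survivors) {
    // Every survivor entry must carry the four string fields plus a line number.
    for (const key of ["file", "original", "mutated", "mutator"] as const) {
      if (typeof s[key] !== "string") {
        throw new Error(`survivor.${key} must be a string`);
      }
    }
    if (!Number.isInteger(s.line)) {
      throw new Error("survivor.line must be an integer");
    }
  }
  return data as SurvivorsFile;
}
```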
For each survivor entry:
- Read survivor.file and locate survivor.line
- Understand the intended behavior at survivor.line
- Write a test asserting the output the original version produces correctly
- Verify the test would fail if the mutated version were substituted
- Name each test: test("{mutator} at {file}:{line} -- {behavior description}")

Testing pyramid check: After generating mutation-targeted tests, validate the ratio:
Output: Same folder structure as standard mode, but test files are prefixed with mutation- (e.g., mutation-auth-service.test.ts).
After generating mutation-targeted tests, skip to Step 10 (Quality Check).
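For the ConditionalBoundary survivor in the example contract, the kill comes from pinning a test to the one input where the two versions diverge. A minimal sketch -- both predicates are hypothetical stand-ins for the real service code:

```typescript
// Hypothetical original predicate from src/services/auth.ts:42.
const original = (balance: number): boolean => balance > 0;
// The surviving ConditionalBoundary mutant: > became >=.
const mutated = (balance: number): boolean => balance >= 0;

// balance === 0 is the only input where the two versions disagree,
// so a test asserting original(0) === false is guaranteed to kill
// this mutant; any other input passes against both versions.
function killsMutant(input: number): boolean {
  return original(input) !== mutated(input);
}
```

A generated mutation-targeted test would therefore assert the original behavior at exactly that boundary (a zero balance is rejected), rather than re-testing inputs the surviving mutant already handles identically.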
For JS/TS projects: generate optional stryker.config.mjs companion if not already present in project root.
For PHP projects: generate optional infection.json5 companion if not already present in project root.
These configs are placed in the output config/ subfolder as suggestions, not auto-applied.
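One possible shape for the stryker.config.mjs companion -- the globs, runner, and thresholds below are suggestions to adapt, not the plugin's canonical config:

```javascript
// Suggested Stryker config companion; adjust mutate globs to the project.
/** @type {import('@stryker-mutator/api/core').PartialStrykerOptions} */
export default {
  mutate: ['src/**/*.ts', '!src/**/*.test.ts'],
  testRunner: 'vitest',
  reporters: ['html', 'clear-text', 'progress'],
  coverageAnalysis: 'perTest',
  // break fails the run below 50% mutation score; tune per project.
  thresholds: { high: 80, low: 60, break: 50 },
};
```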
For each provided path:
qa-test-cases (REQUIRED in standard mode):
- Parse Feature: blocks with @tags
- Parse each Scenario:, extracting Given/When/Then steps

backend-scaffold OR frontend-scaffold (REQUIRED):
backend-api-contract (optional):
backend-service-implement (optional):
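The qa-test-cases parsing step amounts to a line scan over the feature text. A deliberately naive sketch -- it ignores data tables, docstrings, and Scenario Outlines, and all names are illustrative:

```typescript
interface ParsedScenario {
  name: string;
  tags: string[];
  steps: string[];
}

// Naive Gherkin scan: @tags attach to the Scenario that follows them;
// Given/When/Then/And/But lines accumulate on the current scenario.
function extractScenarios(feature: string): ParsedScenario[] {
  const scenarios: ParsedScenario[] = [];
  let pendingTags: string[] = [];
  for (const raw of feature.split("\n")) {
    const line = raw.trim();
    if (line.startsWith("@")) {
      pendingTags = line.split(/\s+/).filter((t) => t.startsWith("@"));
    } else if (line.startsWith("Scenario:")) {
      scenarios.push({
        name: line.slice("Scenario:".length).trim(),
        tags: pendingTags,
        steps: [],
      });
      pendingTags = [];
    } else if (/^(Given|When|Then|And|But)\b/.test(line) && scenarios.length > 0) {
      scenarios[scenarios.length - 1].steps.push(line);
    }
  }
  return scenarios;
}
```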
Present input summary:
INPUT SUMMARY
-------------------------------------------------------------
Sources Found: {list of found artifacts}
Sources Missing: {list with impact assessment}
BDD Scenarios: {total count}
@smoke: {count}
@unit: {count}
@integration: {count}
@e2e: {count}
@boundary: {count}
@edge-case: {count}
Testable Code Units:
Services: {count from scaffold}
Routes/Handlers: {count from scaffold}
Components/Hooks: {count if frontend}
Schemas: {count}
API Endpoints: {count from contract, or "N/A"}
Read $JAAN_CONTEXT_DIR/tech.md for test framework detection:
#frameworks (default: Vitest for unit/integration, Playwright for E2E)

If tech.md missing or incomplete, use AskUserQuestion:
Route BDD scenarios to test tiers based on tag taxonomy.
Reference: See ${CLAUDE_PLUGIN_ROOT}/docs/extending/qa-test-generate-reference.md, section "Detailed Tag Routing Map", for the full tag-to-tier routing table with target descriptions.
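The routing idea behind that table can be sketched as a first-match lookup. The tags below are the ones this document uses; the precedence order and the default tier are assumptions, not the reference table:

```typescript
type Tier = "unit" | "integration" | "e2e";

// First-match routing: more expensive tiers are checked first so a
// scenario tagged both @e2e and @unit lands in the E2E suite. @smoke
// scenarios fan out to both a unit and an E2E portion elsewhere in
// this skill; here they default to e2e for simplicity.
const TAG_TIERS: Array<[string, Tier]> = [
  ["@e2e", "e2e"],
  ["@mobile", "e2e"],
  ["@integration", "integration"],
  ["@api", "integration"],
  ["@unit", "unit"],
  ["@boundary", "unit"],
  ["@edge-case", "unit"],
  ["@smoke", "e2e"],
];

function routeScenario(tags: string[]): Tier {
  for (const [tag, tier] of TAG_TIERS) {
    if (tags.includes(tag)) return tier;
  }
  return "unit"; // untagged scenarios default to the cheapest tier
}
```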
Calculate totals and present plan:
TEST GENERATION PLAN
-------------------------------------------------------------
Config Files:
- vitest.config.ts (workspace with unit + integration)
- playwright.config.ts (with playwright-bdd)
- test/setup/unit.ts (MSW server setup)
- test/setup/integration.ts (DB + MSW setup)
- test/mocks/server.ts (MSW server instance)
- test/mocks/handlers.ts (auto-generated from API contract)
Test Data:
- test/factories/{entity}.factory.ts ({count} factories)
- test/fixtures/db-seed.ts (DB seeding scenarios)
- test/utils/test-utils.ts (shared test helpers)
Unit Tests ({count} files):
{list of service.test.ts and hook.test.ts files}
Integration Tests ({count} files):
{list of integration.test.ts files}
E2E Tests ({count} files):
{list of Playwright .spec.ts files per user flow}
Coverage Targets:
- Unit: 80% line, 70% branch
- Integration: 60% line
- E2E: 100% of acceptance criteria scenarios
- BDD: All Given/When/Then steps mapped to assertions
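One possible shape for the vitest.config.ts in that plan, wiring the unit/integration split to the coverage targets above. This is a sketch assuming Vitest 3's `test.projects`; older setups use a separate vitest.workspace.ts, and the paths mirror the plan's layout:

```typescript
import { defineConfig } from 'vitest/config';

// Sketch only: two projects in one config; coverage thresholds match
// the targets above (80% line / 70% branch).
export default defineConfig({
  test: {
    coverage: {
      provider: 'v8',
      thresholds: { lines: 80, branches: 70 },
    },
    projects: [
      {
        test: {
          name: 'unit',
          environment: 'node',
          include: ['unit/**/*.test.ts'],
          setupFiles: ['./config/setup/unit.ts'], // MSW server lifecycle
        },
      },
      {
        test: {
          name: 'integration',
          include: ['integration/**/*.integration.test.ts'],
          setupFiles: ['./config/setup/integration.ts'], // DB + MSW setup
        },
      },
    ],
  },
});
```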
Show complete plan before generating:
FINAL CONFIRMATION
-------------------------------------------------------------
Source Artifacts:
- qa-test-cases: {path}
- scaffold: {path}
- api-contract: {path or "N/A"}
Test Framework Stack:
- Unit/Integration: {Vitest/Jest}
- E2E: {Playwright/Cypress}
- BDD Binding: {jest-cucumber adapted for Vitest / playwright-bdd}
- Mocking: {MSW / nock}
- Factories: {Fishery + @anatine/zod-mock / faker.js}
Output Folder: $JAAN_OUTPUTS_DIR/qa/test-generate/{id}-{slug}/
Total Files: {count}
Files to Generate:
Config: {count} files (vitest.config.ts, playwright.config.ts, setup files)
Factories: {count} files (test data factories + db-seed)
Unit Tests: {count} files ({total_scenarios} scenarios)
Integration: {count} files ({total_scenarios} scenarios)
E2E Tests: {count} files ({total_scenarios} scenarios)
Utilities: {count} files (test-utils, msw-handlers)
Use AskUserQuestion:
Do NOT proceed to Phase 2 without explicit approval.
Apply these patterns to reduce generation time and token overhead:
Batch Generation: Generate 3-5 related test scenarios per reasoning pass rather than one-at-a-time. Group by service/component to share setup context and reduce prompt repetition.
Fixture Reuse: Share test data factories across test files. Generate a single {entity}.factory.ts per domain entity and import across unit/integration/E2E rather than duplicating setup code per test file.
Template-Driven Generation: Use the standardized step templates from BDD binding patterns (research Section 1) as starting points. Templates reduce prompt token overhead compared to free-form generation.
Incremental Generation: When re-running after AC changes, only regenerate tests for changed/new acceptance criteria. Preserve existing passing test files and append new ones.
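The fixture-reuse pattern can be sketched without committing to Fishery or faker. A hand-rolled {entity}.factory.ts shape -- the User entity and its fields are hypothetical:

```typescript
// Hand-rolled factory sketch: one per domain entity, imported across
// unit/integration/E2E tiers. Libraries like Fishery add sequences
// and traits on top of this same idea.
interface User {
  id: number;
  email: string;
  active: boolean;
}

let sequence = 0;

function buildUser(overrides: Partial<User> = {}): User {
  sequence += 1;
  return {
    id: sequence,
    email: `user${sequence}@example.com`,
    active: true,
    ...overrides, // per-test tweaks without duplicating setup code
  };
}
```

Each test then states only what it cares about, e.g. `buildUser({ active: false })`, instead of repeating full entity setup.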
Reference: See ${CLAUDE_PLUGIN_ROOT}/docs/extending/qa-test-generate-reference.md, section "Config Generation Specifications", for Vitest workspace, Playwright, and setup file configurations.
Reference: See ${CLAUDE_PLUGIN_ROOT}/docs/extending/qa-test-generate-reference.md, section "Test Data Layer Patterns", for factory, MSW handler, DB seed, and test utility generation patterns.
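Deriving mock handlers from an API contract can be sketched as pure data mapping, independent of the MSW runtime. The contract shape below is hypothetical; in the generated handlers.ts each descriptor would become an msw handler:

```typescript
// Pure mapping from contract endpoints to handler descriptors.
interface ContractEndpoint {
  method: "GET" | "POST" | "PUT" | "DELETE";
  path: string;
  exampleResponse: unknown;
}

interface HandlerStub {
  key: string; // e.g. "GET /users/:id"
  status: number;
  body: unknown;
}

function deriveHandlers(endpoints: ContractEndpoint[]): HandlerStub[] {
  return endpoints.map((e) => ({
    key: `${e.method} ${e.path}`,
    // Assumed convention: creations answer 201, everything else 200.
    status: e.method === "POST" ? 201 : 200,
    body: e.exampleResponse,
  }));
}
```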
For each BDD scenario tagged @unit, @smoke (unit portion), @boundary, @negative (unit portion):
Map each BDD scenario to a Vitest describe/it block using jest-cucumber binding pattern adapted for Vitest.
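The core of that binding -- each Gherkin step text resolved to a reusable function against a shared world object -- can be sketched without the runner. The registry, step patterns, and world fields below are illustrative, not the jest-cucumber API:

```typescript
// Tiny step registry: Gherkin text -> handler, the essence of the
// jest-cucumber / playwright-bdd binding style, minus the test runner.
type StepFn = (world: Record<string, unknown>, ...args: string[]) => void;

const steps = new Map<RegExp, StepFn>();

function defineStep(pattern: RegExp, fn: StepFn): void {
  steps.set(pattern, fn);
}

function runStep(text: string, world: Record<string, unknown>): void {
  for (const [pattern, fn] of steps) {
    const match = text.match(pattern);
    if (match) return fn(world, ...match.slice(1));
  }
  throw new Error(`Unbound step: ${text}`);
}

// Hypothetical bindings for a balance feature:
defineStep(/^Given a balance of (\d+)$/, (world, amount) => {
  world.balance = Number(amount);
});
defineStep(/^Then the account is (active|frozen)$/, (world, state) => {
  const isActive = (world.balance as number) > 0;
  if (isActive !== (state === "active")) throw new Error("state mismatch");
});
```

Inside a real Vitest file, each Scenario becomes a describe block whose it() runs the scenario's steps through this kind of registry.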
Reference: See ${CLAUDE_PLUGIN_ROOT}/docs/extending/qa-test-generate-reference.md, section "BDD Binding Code Templates", for the feature-scoped step definition code template.
For each service function in scaffold:
- Mock external dependencies with vi.mock()
- Assert outputs and side effects with expect() assertions

If frontend-scaffold provided:

- Test hooks with renderHook from @testing-library/react
- Test components with render from @testing-library/react

For each BDD scenario tagged @integration, @api:
Reference: See ${CLAUDE_PLUGIN_ROOT}/docs/extending/qa-test-generate-reference.md, section "Integration Test Patterns", for API and service integration test generation patterns.
For each BDD scenario tagged @e2e, @smoke (E2E portion), @mobile:
Generate step definitions using playwright-bdd's createBdd pattern with Given/When/Then mapped to Playwright page actions.
Reference: See ${CLAUDE_PLUGIN_ROOT}/docs/extending/qa-test-generate-reference.md, section "Playwright BDD Step Templates", for the playwright-bdd step definition code template.
Reference: See ${CLAUDE_PLUGIN_ROOT}/docs/extending/qa-test-generate-reference.md, section "E2E Page Object Patterns", for page object generation and fixture composition patterns.
Before preview, validate generated tests against all quality criteria.
Reference: See ${CLAUDE_PLUGIN_ROOT}/docs/extending/qa-test-generate-reference.md, section "Quality Check Checklist", for the full validation checklist (completeness, test data, configuration, code quality).
If any check fails, fix before proceeding.
source "${CLAUDE_PLUGIN_ROOT}/scripts/lib/id-generator.sh"
SUBDOMAIN_DIR="$JAAN_OUTPUTS_DIR/qa/test-generate"
mkdir -p "$SUBDOMAIN_DIR"
NEXT_ID=$(generate_next_id "$SUBDOMAIN_DIR")
Generate slug:
Template:
Runnable test suite for {feature_name} generated from {scenario_count} BDD scenarios
and {scaffold_type} scaffold code. Includes {unit_count} Vitest unit tests,
{integration_count} integration tests, and {e2e_count} Playwright E2E specs.
Test infrastructure: {factory_count} data factories, MSW mock handlers,
Vitest workspace config, and Playwright BDD config. Coverage targets:
80% line (unit), 60% line (integration), 100% scenario (E2E).
OUTPUT PREVIEW
-------------------------------------------------------------
ID: {NEXT_ID}
Folder: $JAAN_OUTPUTS_DIR/qa/test-generate/{NEXT_ID}-{slug}/
Files:
{NEXT_ID}-{slug}.md (Test strategy + coverage map)
config/
vitest.config.ts (Workspace config)
playwright.config.ts (BDD + projects config)
setup/unit.ts (MSW lifecycle)
setup/integration.ts (DB + MSW setup)
mocks/server.ts (MSW server)
mocks/handlers.ts (Auto-generated handlers)
test-utils.ts (Shared helpers + matchers)
unit/
{service-name}.test.ts ({n} scenarios)
...
integration/
{resource-name}.integration.test.ts ({n} scenarios)
...
e2e/
{flow-name}.spec.ts ({n} scenarios)
steps/{feature}.steps.ts (Step definitions)
steps/fixtures.ts (Page object fixtures)
pages/{page}.page.ts (Page objects)
fixtures/
factories/{entity}.factory.ts ({n} factories)
db-seed.ts (Seed scenarios)
[Show first unit test file as preview snippet]
[Show first E2E step definition as preview snippet]
Use AskUserQuestion:
If approved:
OUTPUT_FOLDER="$JAAN_OUTPUTS_DIR/qa/test-generate/${NEXT_ID}-${slug}"
mkdir -p "$OUTPUT_FOLDER"
mkdir -p "$OUTPUT_FOLDER/config/setup"
mkdir -p "$OUTPUT_FOLDER/config/mocks"
mkdir -p "$OUTPUT_FOLDER/unit"
mkdir -p "$OUTPUT_FOLDER/integration"
mkdir -p "$OUTPUT_FOLDER/e2e/steps"
mkdir -p "$OUTPUT_FOLDER/e2e/pages"
mkdir -p "$OUTPUT_FOLDER/fixtures/factories"
Path: $OUTPUT_FOLDER/${NEXT_ID}-${slug}.md
Use template from: $JAAN_TEMPLATES_DIR/jaan-to-qa-test-generate.template.md
Fill sections:
Write all configuration and setup files to $OUTPUT_FOLDER/config/
Write all unit test files to $OUTPUT_FOLDER/unit/
Write all integration test files to $OUTPUT_FOLDER/integration/
Write all E2E test files to $OUTPUT_FOLDER/e2e/
Write all factory and seed files to $OUTPUT_FOLDER/fixtures/
source "${CLAUDE_PLUGIN_ROOT}/scripts/lib/index-updater.sh"
add_to_index \
"$SUBDOMAIN_DIR/README.md" \
"$NEXT_ID" \
"${NEXT_ID}-${slug}" \
"{Feature Name} Test Suite" \
"{Executive Summary}"
TEST SUITE GENERATED
-------------------------------------------------------------
ID: {NEXT_ID}
Folder: $JAAN_OUTPUTS_DIR/qa/test-generate/{NEXT_ID}-{slug}/
Index: Updated $JAAN_OUTPUTS_DIR/qa/test-generate/README.md
Total Files: {count}
Config: {count}
Unit Tests: {count} ({scenario_count} scenarios)
Integration: {count} ({scenario_count} scenarios)
E2E Tests: {count} ({scenario_count} scenarios)
Fixtures: {count} ({factory_count} factories)
Coverage Targets:
Unit: 80% line, 70% branch
Integration: 60% line
E2E: 100% of acceptance criteria
Test suite generated successfully!
Next Steps:
- Copy test files to your project's test/ directory
- Run npm install to add test dependencies (vitest, playwright, fishery, msw, etc.)
- Run npx vitest run --workspace=unit to execute unit tests
- Run npx playwright test to execute E2E tests
- Run /jaan-to:qa-test-review to review test quality (when available)
- See the main document for the full CI integration guide
Use AskUserQuestion:
If "Learn from this": Run /jaan-to:learn-add qa-test-generate "{feedback}"
Reference: See ${CLAUDE_PLUGIN_ROOT}/docs/extending/qa-test-generate-reference.md, section "Key Generation Rules", for the BDD-to-assertion mapping table, tag-to-tier routing, test data factory and MSW handler patterns, and anti-patterns to avoid.
- tech.md detection
- $JAAN_OUTPUTS_DIR path

Reference: See ${CLAUDE_PLUGIN_ROOT}/docs/extending/qa-test-generate-reference.md, section "Definition of Done Checklist", for the complete checklist.