From autonomous-sdlc
Generates pytest-bdd scaffolding from acceptance criteria: .feature files, step definitions, conftest.py, and tests/bdd/ directory structure. Use after bdd-spec or with existing Given/When/Then scenarios.
```shell
npx claudepluginhub joshuaoliphant/claude-plugins --plugin autonomous-sdlc
```

This skill uses the workspace's default tool permissions.
Take structured acceptance criteria (Given/When/Then) and produce runnable pytest-bdd scaffolding: .feature files, step definition stubs, pytest configuration, and directory structure. This skill generates files — it is not conversational.
Tools: `uv add`, `mkdir`, `pytest --collect-only`
Dependencies: `uv add --dev pytest-bdd`
Input: `bdd-spec` output, a plan document, or directly from the user

Before generating any files, verify acceptance criteria exist. Check in this order:

1. `bdd-spec` output in the current conversation (structured AC blocks)
2. `specs/{feature-slug}-plan.md` → extract `## Acceptance Criteria`

STOP if no acceptance criteria are found. Do not generate generic or placeholder feature files. Report: "No acceptance criteria found." Suggest: "Run `bdd-spec` to co-author acceptance criteria first."
Every scenario must trace back to a specific acceptance criterion.
```
tests/bdd/
├── conftest.py              # Shared BDD fixtures
├── features/
│   └── {feature}.feature    # Gherkin feature files
└── steps/
    ├── conftest.py          # Shared step definitions (cross-feature)
    └── test_{feature}.py    # Feature-specific step definitions
```
| Acceptance Criteria | Gherkin |
|---|---|
| AC heading | Feature name |
| AC-N blocks | Individual Scenarios |
| Bold Given/When/Then | Gherkin Given/When/Then keywords |
| *and* continuations | Gherkin And keyword |
| Edge case tables | Scenario Outline with Examples |
| Shared preconditions across all ACs | Background |
Tag each scenario with @ac-N for traceability.
- `tdd-workflow` — Inner loop. Unit-level tests verifying component internals.
- `verification-stack` — `uv run pytest tests/ -x` auto-discovers BDD tests. No config changes needed.

For detailed pytest-bdd syntax, parser types, and step decorator patterns:
→ references/pytest_bdd_reference.md
```shell
python ${CLAUDE_PLUGIN_ROOT}/scripts/feedback_manager.py autonomous-sdlc show-feedback
```
Apply relevant feedback: test_generation, bdd_workflow, general.
Run the prerequisite guard. Extract each AC-N block, noting feature name, AC numbers, Given/When/Then content, edge cases, and Scenario Outline tables.
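The extraction step can be pictured as a small parser. This is a hypothetical sketch that assumes AC blocks appear as `AC-N:` lines under an `## Acceptance Criteria` heading; the real plan format may differ:

```python
import re


def extract_ac_blocks(plan_md: str) -> dict[str, str]:
    """Pull AC-N blocks out of a plan's '## Acceptance Criteria' section.

    Assumes a hypothetical format where each block starts with 'AC-N:'.
    """
    # Isolate the Acceptance Criteria section (up to the next ## heading).
    section = re.search(r"## Acceptance Criteria\n(.*?)(?=\n## |\Z)", plan_md, re.S)
    if not section:
        return {}
    # Split the section into AC-N blocks.
    blocks = re.findall(r"(AC-\d+)[:.]?\s*(.*?)(?=\nAC-\d+|\Z)", section.group(1), re.S)
    return {num: text.strip() for num, text in blocks}
```

If this returns an empty dict, the prerequisite guard above fires and generation stops.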
```shell
uv add --dev pytest-bdd
```

Add to `pyproject.toml` if not present:

```toml
[tool.pytest.ini_options]
markers = ["bdd: BDD acceptance tests"]

[tool.pytest-bdd]
bdd_features_base_dir = "tests/bdd/features/"
```

```shell
mkdir -p tests/bdd/features tests/bdd/steps
```
Create `__init__.py` files in each directory if they don't exist.
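The setup above can be scripted in one go, assuming a POSIX shell (shown for the Python package directories; `mkdir -p` and `touch` are both idempotent):

```shell
mkdir -p tests/bdd/features tests/bdd/steps
touch tests/bdd/__init__.py tests/bdd/steps/__init__.py
```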
One .feature file per feature:
```gherkin
# ABOUTME: BDD feature file for {feature_name}
# ABOUTME: Generated from acceptance criteria — scenarios map to AC-N numbers
Feature: {Feature Name}
  {One-line description}

  Background:
    Given {shared precondition}

  @ac-1
  Scenario: {AC-1 title}
    Given {precondition}
    When {action}
    Then {outcome}
    And {additional outcome}

  @ac-2
  Scenario Outline: {AC-2 title — parameterized}
    Given {precondition}
    When the user submits <input>
    Then the system displays <error_message>

    Examples:
      | input       | error_message     |
      | empty email | Email is required |
```
One test file per feature in tests/bdd/steps/:
```python
# ABOUTME: Step definitions for {feature_name} BDD tests
# ABOUTME: Stubs generated from acceptance criteria — implement TODO markers
import pytest
from pytest_bdd import scenarios, given, when, then, parsers

scenarios("../features/{feature}.feature")


@given(parsers.parse("a registered user with email {email}"), target_fixture="user")
def given_registered_user(email):
    """Set up a registered user."""
    # TODO: Implement
    raise NotImplementedError("Implement this step")


@when(parsers.parse("the user submits the login form with {credentials}"))
def when_user_submits_login(credentials, user):
    """Perform the login action."""
    # TODO: Implement
    raise NotImplementedError("Implement this step")


@then(parsers.parse("the system displays {message}"))
def then_system_displays(message):
    """Verify displayed message."""
    # TODO: Implement
    raise NotImplementedError("Implement this step")
```
Notes:

- `scenarios()` auto-discovers all scenarios from the feature file
- `parsers.parse()` for parameterized steps
- `target_fixture` to inject state from `@given` into `@when`/`@then`
- `NotImplementedError` makes failures explicit, not silent

Create `tests/bdd/conftest.py` and `tests/bdd/steps/conftest.py` with common fixtures and shared steps.
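The `parsers.parse()` matching can be pictured with a stdlib-only sketch. This is an illustration of the idea, not pytest-bdd's actual implementation (which delegates to the `parse` library):

```python
import re


def parse_step(pattern: str, text: str):
    """Match text against a format-style step pattern; return captured args or None."""
    # Escape regex metacharacters, then turn {name} placeholders into named groups.
    escaped = re.escape(pattern)
    regex = re.sub(r"\\\{(\w+)\\\}", r"(?P<\1>.+)", escaped)
    m = re.fullmatch(regex, text)
    return m.groupdict() if m else None


args = parse_step("a registered user with email {email}",
                  "a registered user with email alice@example.com")
# args == {"email": "alice@example.com"}
```

The captured dict is what pytest-bdd passes into the step function as keyword arguments, which is why the stub parameters above (`email`, `credentials`, `message`) mirror the placeholder names.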
```shell
# Verify all scenarios are discovered
uv run pytest tests/bdd/ --collect-only

# Run BDD tests (stubs will fail at TODO markers — expected)
uv run pytest tests/bdd/ -x
```
Collection should succeed with zero errors. Execution failures at TODO stubs are expected and correct.
| File | Purpose |
|---|---|
| `tests/bdd/features/{feature}.feature` | Gherkin scenarios tagged with `@ac-N` |
| `tests/bdd/steps/test_{feature}.py` | Step definition stubs with `NotImplementedError` |
| `tests/bdd/conftest.py` | Shared BDD fixtures |
| `tests/bdd/steps/conftest.py` | Shared step definitions (cross-feature) |
Stubs are ready for implementation via TDD inner loop (red-green-refactor on each step).
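As an illustration of the green phase, here is one hypothetical way the login step bodies from the stubs above might be implemented against an in-memory fake. Names like `FakeAuth` are invented for this sketch, and the `@given`/`@when`/`@then` decorator wiring stays exactly as generated; only the bodies change:

```python
class FakeAuth:
    """Hypothetical in-memory auth backend used to drive the steps."""

    def __init__(self):
        self.users = {}

    def register(self, email, password="s3cret"):
        self.users[email] = password

    def login(self, email, password):
        return "Welcome back" if self.users.get(email) == password else "Invalid credentials"


def given_registered_user(auth, email):
    # The returned dict becomes the `user` target_fixture for later steps.
    auth.register(email)
    return {"email": email}


def when_user_submits_login(auth, user, password):
    return auth.login(user["email"], password)


def then_system_displays(result, message):
    assert result == message
```

Each step goes red (the `NotImplementedError` stub), then green (a minimal body like the above), then gets refactored, in line with the TDD inner loop.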