Develop features using strict Test-Driven Development with parallel test specialists
Develop features using strict TDD with parallel test specialists who create comprehensive test suites before any implementation. Use this when building new features to ensure 100% test-first compliance and proper test organization in centralized directories.
/plugin marketplace add aws-solutions-library-samples/guidance-for-claude-code-with-amazon-bedrock
/plugin install tdd-workflow@aws-claude-code-plugins

Usage: /tdd-feature [feature-description]

You are a TDD expert. When developing features, ALWAYS write tests first, verify they fail, then implement minimal code to pass.
$ARGUMENTS
If no feature description was provided above, ask the user: "What feature would you like to implement using TDD?"
Before writing any tests, establish proper test placement:
Detect existing structure:
# Look for test directories and configuration
find . -maxdepth 3 -type d -name "test*"
grep -r "testpaths\|test_suite" pyproject.toml setup.py pytest.ini 2>/dev/null || true
Create centralized test structure:
tests/
├── unit/                       # Fast isolated tests
│   ├── test_models.py          # Database models
│   ├── test_services.py        # Business logic
│   └── test_utils.py           # Utility functions
├── integration/                # Component interaction tests
│   ├── test_api_endpoints.py
│   └── test_database_operations.py
├── e2e/                        # End-to-end system tests
│   └── test_user_workflows.py
├── fixtures/                   # Test data and fixtures
│   ├── sample_data.json
│   └── mock_responses.py
└── conftest.py                 # Shared test configuration
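A minimal conftest.py sketch for this layout, assuming pytest and the fixtures/ directory above (the fixture name and sample file contents are illustrative):
# tests/conftest.py -- shared fixtures (illustrative sketch)
import json
from pathlib import Path

import pytest

FIXTURES_DIR = Path(__file__).parent / "fixtures"

@pytest.fixture
def sample_data():
    """Load shared sample data used across unit and integration tests."""
    return json.loads((FIXTURES_DIR / "sample_data.json").read_text())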
Feature test placement guidelines:
tests/unit/test_[module_name].py
tests/integration/test_[feature_name]_api.py
tests/e2e/test_[feature_name]_workflow.py
tests/performance/test_[feature_name]_perf.py

Deploy concurrent test specialists: @test-generator @qa-engineer @performance-profiler @security-reviewer
All subagents work in parallel to create comprehensive test coverage BEFORE implementation:
# ALWAYS start with tests that fail
import pytest

def test_feature_core_functionality():
    """Test the main happy path"""
    # This MUST fail initially
    result = new_feature(valid_input)
    assert result == expected_output

def test_feature_handles_invalid_input():
    """Test error handling"""
    with pytest.raises(ValueError):
        new_feature(invalid_input)

def test_feature_edge_cases():
    """Test boundary conditions"""
    assert new_feature(edge_case_1) == expected_1
    assert new_feature(edge_case_2) == expected_2
# Run tests to confirm they fail - use proper test path
pytest tests/unit/test_new_feature.py -v
# Expected output:
# tests/unit/test_new_feature.py::test_feature_core_functionality FAILED
# tests/unit/test_new_feature.py::test_feature_handles_invalid_input FAILED
# tests/unit/test_new_feature.py::test_feature_edge_cases FAILED
# Write ONLY enough code to pass tests
def new_feature(input_data):
    """Minimal implementation to pass tests"""
    if not is_valid(input_data):
        raise ValueError("Invalid input")
    # Just enough logic to pass
    return process(input_data)
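The is_valid and process helpers above are placeholders; a hedged sketch of what they might look like for a simple dict-based feature (the names, rules, and transformation are assumptions, not part of the plugin):
# Hypothetical helpers for the minimal implementation above (illustrative only)
def is_valid(input_data):
    """Assumed rule: input must be a non-empty dict."""
    return isinstance(input_data, dict) and bool(input_data)

def process(input_data):
    """Assumed transformation: normalize keys -- just enough to satisfy the tests."""
    return {key.lower(): value for key, value in input_data.items()}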
# After tests pass, improve code quality
class FeatureHandler:
    """Refactored with better design"""

    def __init__(self):
        self.validator = InputValidator()
        self.processor = DataProcessor()

    def execute(self, input_data):
        """Clean, maintainable implementation"""
        self.validator.validate(input_data)
        return self.processor.process(input_data)
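Refactoring only counts as done while the suite stays green; re-run the same tests after each change. Assuming the refactor preserved behavior, the earlier failures should now report as passes:
pytest tests/unit/test_new_feature.py -v
# tests/unit/test_new_feature.py::test_feature_core_functionality PASSED
# tests/unit/test_new_feature.py::test_feature_handles_invalid_input PASSED
# tests/unit/test_new_feature.py::test_feature_edge_cases PASSED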
def test_feature_produces_correct_output():
    """Core functionality works correctly"""
    pass

def test_feature_handles_various_inputs():
    """Works with different input types"""
    pass

def test_feature_handles_null_input():
    """Gracefully handles None"""
    pass

def test_feature_handles_malformed_data():
    """Handles bad data gracefully"""
    pass

def test_feature_with_empty_input():
    """Handles empty collections"""
    pass

def test_feature_with_maximum_values():
    """Handles boundary conditions"""
    pass

def test_feature_performance():
    """Completes within acceptable time"""
    import time
    start = time.time()
    new_feature(large_dataset)
    assert time.time() - start < 1.0  # Under 1 second

def test_feature_integrates_with_system():
    """Works with rest of system"""
    pass
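As a sketch of how one of these stubs might be filled in, here is a parametrized version of test_feature_handles_various_inputs; pytest.mark.parametrize is standard pytest, while the specific inputs and the non-None assertion are assumptions about the feature:
import pytest

@pytest.mark.parametrize("input_data", [
    {"name": "alice"},          # typical dict input (assumed shape)
    {"name": "bob", "age": 30}, # extra fields should still be accepted
])
def test_feature_handles_various_inputs(input_data):
    """Works with different input types"""
    assert new_feature(input_data) is not None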
Analyze Project Structure
# Detect existing test organization
find . -name "conftest.py" -o -name "pytest.ini" -o -name "pyproject.toml"
ls -la tests/ test/ src/tests/ 2>/dev/null || echo "No existing test dirs found"
# Check current test patterns
find . -name "test_*.py" -o -name "*_test.py" | head -3
Understand Requirements
Write Comprehensive Test Suite
Verify All Tests Fail
Implement Incrementally
Refactor Continuously
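A typical loop for the last three steps, assuming pytest with the pytest-cov plugin installed (the source path src/ is a placeholder for your package):
# Red: confirm the new tests fail
pytest tests/unit/test_new_feature.py -v
# Green: implement just enough code, then re-run until everything passes
pytest tests/unit/test_new_feature.py -v
# Refactor: keep the full suite green and check coverage (requires pytest-cov)
pytest tests/ --cov=src --cov-report=term-missing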
# Basic feature development
/tdd-feature "Add email validation to user registration"
# Tests will be placed in: tests/unit/test_user_validation.py
# Complex feature with specific requirements
/tdd-feature "Implement shopping cart with discount rules, tax calculation, and inventory checking"
# Tests will be placed in: tests/unit/test_shopping_cart.py, tests/integration/test_cart_service.py
# API endpoint development
/tdd-feature "Create REST API endpoint for user profile updates with validation"
# Tests will be placed in: tests/integration/test_user_profile_api.py
# Background job implementation
/tdd-feature "Build async task processor for image resizing"
# Tests will be placed in: tests/unit/test_image_processor.py, tests/integration/test_async_tasks.py
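For the first example above, the generated tests/unit/test_user_validation.py might start with something like the sketch below; the validate_email function, its module path, and its behavior are assumptions about the not-yet-written implementation:
# tests/unit/test_user_validation.py (illustrative sketch, written before the code exists)
from myapp.validation import validate_email  # assumed module path

def test_accepts_well_formed_address():
    assert validate_email("user@example.com") is True

def test_rejects_address_without_at_sign():
    assert validate_email("user.example.com") is False

def test_rejects_empty_string():
    assert validate_email("") is False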
The command will produce:
Test Suite First
Test Execution Report
Implementation
Coverage Report
Refactoring Suggestions
Track TDD success: