python3-add-feature
Guided workflow for adding new features to Python projects. Use when planning a new feature implementation, when adding functionality with proper test coverage, or when following TDD to build features incrementally.
From the `python3-development` plugin. Install with `npx claudepluginhub jamie-bitflight/claude_skills --plugin python3-development`. This skill uses the workspace's default tool permissions.
<feature_description>$ARGUMENTS</feature_description>
# Python Feature Addition Workflow

The model guides feature development through discovery, planning, implementation, and verification phases.

## Arguments

<feature_description/>

## Instructions
- Understand the feature request from arguments
- Discover project context (structure, patterns, existing code)
- Plan implementation (files to create/modify, dependencies)
- Implement with tests following TDD
- Verify quality (linting, types, coverage)
## Phase 1: Discovery

### Gather Project Context
CHECK:
- [ ] pyproject.toml exists and has project configuration
- [ ] src/ or packages/ directory structure
- [ ] tests/ directory with existing test patterns
- [ ] Linting and type-check configuration (ruff; ty and/or mypy per project — see python3-standards)
- [ ] Existing patterns for similar features
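The checklist above can also be gathered programmatically. A minimal sketch, assuming a typical `src/` layout; the helper name and return shape are illustrative, not part of this skill:

```python
from pathlib import Path


def discover_context(root: Path) -> dict[str, bool]:
    """Summarize the Phase 1 checklist for a project rooted at `root`."""
    pyproject = root / "pyproject.toml"
    config = pyproject.read_text() if pyproject.is_file() else ""
    return {
        "pyproject": pyproject.is_file(),
        "src_layout": (root / "src").is_dir(),
        "tests": (root / "tests").is_dir(),
        "ruff_configured": "[tool.ruff" in config,
        # ty and/or mypy per project; see python3-standards
        "type_checker_configured": any(
            marker in config for marker in ("[tool.ty", "[tool.mypy")
        ),
    }
```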
### Identify Integration Points
Determine where the feature fits:
- Which module/package owns this functionality?
- What existing classes/functions will interact with it?
- What new files need to be created?
## Phase 2: Planning

### Feature Specification

Create a clear specification:
```markdown
## Feature: [Name]

**Purpose**: [One sentence describing what this enables]

**User Story**: As a [user type], I want [capability] so that [benefit].

**Acceptance Criteria**:
- [ ] Criterion 1
- [ ] Criterion 2
- [ ] Criterion 3

**Files to Create/Modify**:
- `src/module/new_feature.py` - Main implementation
- `tests/test_new_feature.py` - Test suite
- `src/module/__init__.py` - Export new functionality

**Dependencies**:
- Internal: [existing modules to import]
- External: [new packages if any]
```
### MoSCoW Prioritization
<moscow_framework>
Categorize all requirements using MoSCoW:
| Priority | Meaning | Criteria |
|---|---|---|
| P0 (Must Have) | Non-negotiable for v1 | Feature is broken without this |
| P1 (Should Have) | Important, committed follow-up | High value, but v1 works without it |
| P2 (Could Have) | Desirable if time permits | Nice-to-have enhancements |
| Won't Have | Explicitly deferred | Out of scope for this release |
**Discipline Check**: If everything is P0, nothing is P0. Re-evaluate.

Example:
```markdown
### Requirements by Priority

**P0 (Must Have)**:
- [ ] Parse CSV input with header detection
- [ ] Output formatted report to stdout
- [ ] Handle malformed rows with error message

**P1 (Should Have)**:
- [ ] Support custom delimiter (--delimiter)
- [ ] Progress indicator for large files

**P2 (Could Have)**:
- [ ] JSON output format option
- [ ] Column filtering

**Won't Have (This Release)**:
- Excel format support (separate feature)
- Database export (requires new dependency)
```
</moscow_framework>
### Acceptance Criteria Formats
<acceptance_criteria_patterns>
Use ONE of these formats for testable acceptance criteria:
**Format 1: Given/When/Then (BDD)**

```
Given [precondition]
When [user action]
Then [expected outcome]
```

Example:

```
Given a CSV file with 1000 rows
When the user runs `parse report.csv`
Then the output shows all rows within 2 seconds
And no memory warnings are logged
```
**Format 2: Checklist with Specifics**
- [ ] Command `parse --help` shows usage with examples
- [ ] Empty file input returns exit code 1 with message "Empty file"
- [ ] Unicode characters in data are preserved in output
- [ ] Ctrl+C during processing exits cleanly (no stack trace)
**Anti-Patterns to Avoid**:
| Anti-Pattern | Problem | Better |
|---|---|---|
| "Should be fast" | Unmeasurable | "Completes in <2s for 10K rows" |
| "Handle errors gracefully" | Vague | "Invalid input returns exit code 1 with descriptive message" |
| "User-friendly output" | Subjective | "Output uses Rich table formatting with headers" |
</acceptance_criteria_patterns>
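A Format 2 criterion can be turned directly into an executable check. A hedged sketch: `parse_file` is a hypothetical stand-in for the command under test, not part of this skill:

```python
import io
from contextlib import redirect_stdout
from pathlib import Path


def parse_file(path: Path) -> int:
    """Toy parser: exit code 0 on success, 1 with a message on empty input."""
    if not path.read_text().strip():
        print("Empty file")
        return 1
    return 0


def check_empty_file_criterion(tmp_dir: Path) -> bool:
    """Verify: empty file input returns exit code 1 with message 'Empty file'."""
    empty = tmp_dir / "empty.csv"
    empty.write_text("")
    buf = io.StringIO()
    with redirect_stdout(buf):
        exit_code = parse_file(empty)
    return exit_code == 1 and "Empty file" in buf.getvalue()
```

Because the criterion names an exit code and an exact message, the check is mechanical; "handle errors gracefully" could not be encoded this way.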
### Design Interface First

Define the public API before implementation:
```python
# Define function signatures and docstrings
def new_feature(
    input_data: InputType,
    *,
    option: str = "default",
) -> ResultType:
    """Process input data with new feature capability.

    Args:
        input_data: The data to process
        option: Configuration option

    Returns:
        Processed result

    Raises:
        ValidationError: If input_data is invalid
    """
    ...
```
## Phase 3: Test-Driven Implementation

### Write Tests First
```python
import pytest
from pytest_mock import MockerFixture


class TestNewFeature:
    """Tests for new_feature functionality."""

    def test_basic_operation(self) -> None:
        """Test basic feature operation with valid input."""
        # Arrange
        input_data = create_valid_input()

        # Act
        result = new_feature(input_data)

        # Assert
        assert result.status == "success"

    def test_handles_invalid_input(self) -> None:
        """Test feature raises error for invalid input."""
        with pytest.raises(ValidationError, match="Invalid input"):
            new_feature(invalid_input)

    def test_option_affects_behavior(self) -> None:
        """Test that option parameter changes processing."""
        result_default = new_feature(data, option="default")
        result_custom = new_feature(data, option="custom")
        assert result_default != result_custom
```
### Implement to Pass Tests
- Run tests - they should fail (red)
- Implement minimal code to pass (green)
- Refactor for quality (refactor)
- Repeat for each test case
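As a concrete sketch of the green step, here is a minimal implementation that would satisfy tests shaped like the skeleton above. All names (`ValidationError`, `Result`, the list-of-strings input type) are illustrative assumptions mirroring this document's example API, not a real library:

```python
from dataclasses import dataclass


class ValidationError(Exception):
    """Raised when input data fails validation."""


@dataclass
class Result:
    status: str
    payload: list[str]


def new_feature(input_data: list[str], *, option: str = "default") -> Result:
    """Minimal implementation: just enough to turn the failing tests green."""
    if not input_data:
        raise ValidationError("Invalid input: empty data")
    # option changes processing, so the default/custom tests can diverge
    processed = [s.upper() if option == "custom" else s for s in input_data]
    return Result(status="success", payload=processed)
```

Resist adding behavior no test demands; refactor only once the suite passes.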
### Implementation Checklist
- [ ] All functions have complete type hints
- [ ] Docstrings follow Google style
- [ ] Error handling uses specific exceptions
- [ ] No hardcoded values (use constants or config)
- [ ] Follows existing project patterns
## Phase 4: Integration

### Update Module Exports

```python
# src/module/__init__.py
from .new_feature import new_feature, ResultType

__all__ = [
    "new_feature",
    "ResultType",
    # ... existing exports
]
```
### Add to CLI (if applicable)

```python
# Assumes the CLI module already imports Path, Annotated, and typer,
# and defines `app = typer.Typer()` and a Rich `console`
@app.command()
def feature_command(
    input_file: Annotated[Path, typer.Argument(help="Input file")],
) -> None:
    """Run new feature on input file."""
    result = new_feature(load_data(input_file))
    console.print(f"Result: {result}")
```
## Phase 5: Verification

### Run Quality Checks

```bash
# Linting
uv run ruff check src/ tests/
uv run ruff format --check src/ tests/

# Type checking: match hooks/CI (ty vs mypy); do not use mypy only because [tool.mypy] exists
uv run ty check src/ tests/
# If hooks/CI run mypy:
# uv run mypy src/ tests/

# Tests with coverage
uv run pytest tests/ --cov=src --cov-report=term-missing
```
### Coverage Requirements
- New feature code: 100% coverage
- Integration points: Covered by integration tests
- Overall project: Maintain or improve existing coverage
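These floors can be enforced automatically rather than checked by hand. A sketch of coverage settings in `pyproject.toml`; the threshold and paths are assumptions to adapt per project (`--cov` requires the pytest-cov plugin):

```toml
[tool.coverage.report]
fail_under = 80        # overall floor; raise it as coverage improves
show_missing = true

[tool.pytest.ini_options]
addopts = "--cov=src --cov-report=term-missing"
```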
## Success Metrics
<success_metrics_framework>
Define measurable success criteria before implementation:
**Leading Indicators** (Observable in days-weeks):
| Metric | Target | How to Measure |
|---|---|---|
| Test pass rate | 100% | pytest --tb=short |
| Type coverage | 100% | ty check or project mypy command |
| Code coverage | ≥80% new code | pytest --cov |
| Command startup | <500ms | time uv run <cmd> --help |
**Lagging Indicators** (Observable in weeks-months):
| Metric | Target | How to Measure |
|---|---|---|
| User adoption | N users/week | Usage logs or feedback |
| Error rate | <1% of invocations | Error logs |
| Support tickets | Reduction from baseline | Issue tracker |
**For CLI Features**:

```bash
# Performance baseline
hyperfine 'uv run <command> <args>' --warmup 3

# Memory usage (GNU time)
/usr/bin/time -v uv run <command> <args> 2>&1 | grep "Maximum resident"
```
**Evaluation Window**: Specify when metrics will be reviewed (e.g., "1 week post-merge", "after 100 invocations").
</success_metrics_framework>
## Documentation
Update relevant documentation:
- README.md if user-facing feature
- API docs if public interface
- CHANGELOG.md with feature description
## Example Workflow

**Request**: "Add CSV export functionality to the report module"
### Discovery

Project structure:
- `src/reports/generator.py` - existing report generation
- `src/reports/formats/` - existing format handlers
- `tests/reports/` - existing report tests

Pattern: Format handlers inherit from `BaseFormatter`
### Planning

```markdown
## Feature: CSV Export

**Purpose**: Export reports in CSV format

**Files**:
- `src/reports/formats/csv_formatter.py` - CSV implementation
- `tests/reports/formats/test_csv_formatter.py` - Tests
```

**Interface**:

```python
class CsvFormatter(BaseFormatter):
    def format(self, report: Report) -> str: ...
```
### Implementation

- Write tests for `CsvFormatter`
- Implement `CsvFormatter` class
- Register in format factory
- Add CLI option `--format csv`
- Run verification checks
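A hedged sketch of what this `CsvFormatter` might look like; `BaseFormatter`, `Report`, and the field layout are illustrative assumptions, not this project's real classes:

```python
import csv
import io
from dataclasses import dataclass, field


@dataclass
class Report:
    headers: list[str]
    rows: list[list[str]] = field(default_factory=list)


class BaseFormatter:
    def format(self, report: Report) -> str:
        raise NotImplementedError


class CsvFormatter(BaseFormatter):
    """Render a report as CSV, headers first (stdlib csv handles quoting)."""

    def format(self, report: Report) -> str:
        buf = io.StringIO()
        writer = csv.writer(buf)
        writer.writerow(report.headers)
        writer.writerows(report.rows)
        return buf.getvalue()
```

Using the stdlib `csv` module rather than string joins keeps quoting and embedded delimiters correct for free.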
## Quality Standards

Consult `../python3-development/references/python3-standards.md` when verifying the feature against shared plugin standards. Ensure:

- **Type Safety**: All code passes the project's type checker. Match hooks/CI (`ty` vs `mypy`): use `uv run ty check` when ty is what the repo runs; use `uv run mypy` only when mypy is actually invoked there (not merely because `[tool.mypy]` exists)
- **Linting**: Zero ruff errors or warnings
- **Tests**: New code has 100% coverage
- **Patterns**: Follows existing project conventions
- **Documentation**: Docstrings on all public interfaces