From python-engineering
Guides adding new features to Python projects through discovery, planning, test-driven implementation, linting, and verification phases.
npx claudepluginhub jamie-bitflight/claude_skills --plugin python-engineering
This skill uses the workspace's default tool permissions.
<feature_description>$ARGUMENTS</feature_description>
The model guides feature development through discovery, planning, implementation, and verification phases.
CHECK:
- [ ] pyproject.toml exists and has project configuration
- [ ] src/ or packages/ directory structure
- [ ] tests/ directory with existing test patterns
- [ ] Linting and type-check configuration (ruff; ty and/or mypy per project — see python3-standards)
- [ ] Existing patterns for similar features
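As an optional aid (not part of the skill itself), the first three boxes can be checked mechanically; the sketch below assumes the conventional src-layout paths named in the checklist.

```python
# Optional discovery aid: confirm the project landmarks from the
# checklist above. Paths follow the common src-layout convention.
from pathlib import Path

root = Path(".")
checks = {
    "pyproject.toml exists": (root / "pyproject.toml").is_file(),
    "src/ or packages/ layout": any((root / d).is_dir() for d in ("src", "packages")),
    "tests/ directory": (root / "tests").is_dir(),
}
for label, ok in checks.items():
    print(f"{'PASS' if ok else 'MISS'}  {label}")
```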
Determine where the feature fits in the existing module structure.
Create a clear specification:
## Feature: [Name]
**Purpose**: [One sentence describing what this enables]
**User Story**: As a [user type], I want [capability] so that [benefit].
**Acceptance Criteria**:
- [ ] Criterion 1
- [ ] Criterion 2
- [ ] Criterion 3
**Files to Create/Modify**:
- `src/module/new_feature.py` - Main implementation
- `tests/test_new_feature.py` - Test suite
- `src/module/__init__.py` - Export new functionality
**Dependencies**:
- Internal: [existing modules to import]
- External: [new packages if any]
<moscow_framework>
Categorize all requirements using MoSCoW:
| Priority | Meaning | Criteria |
|---|---|---|
| P0 (Must Have) | Non-negotiable for v1 | Feature is broken without this |
| P1 (Should Have) | Important, committed follow-up | High value, but v1 works without it |
| P2 (Could Have) | Desirable if time permits | Nice-to-have enhancements |
| Won't Have | Explicitly deferred | Out of scope for this release |
Discipline Check: If everything is P0, nothing is P0. Re-evaluate.
Example:
### Requirements by Priority
**P0 (Must Have)**:
- [ ] Parse CSV input with header detection
- [ ] Output formatted report to stdout
- [ ] Handle malformed rows with error message
**P1 (Should Have)**:
- [ ] Support custom delimiter (--delimiter)
- [ ] Progress indicator for large files
**P2 (Could Have)**:
- [ ] JSON output format option
- [ ] Column filtering
**Won't Have (This Release)**:
- Excel format support (separate feature)
- Database export (requires new dependency)
</moscow_framework>
<acceptance_criteria_patterns>
Use ONE of these formats for testable acceptance criteria:
Format 1: Given/When/Then (BDD)
Given [precondition]
When [user action]
Then [expected outcome]
Example:
Given a CSV file with 1000 rows
When the user runs `parse report.csv`
Then the output shows all rows within 2 seconds
And no memory warnings are logged
Format 2: Checklist with Specifics
- [ ] Command `parse --help` shows usage with examples
- [ ] Empty file input returns exit code 1 with message "Empty file"
- [ ] Unicode characters in data are preserved in output
- [ ] Ctrl+C during processing exits cleanly (no stack trace)
Anti-Patterns to Avoid:
| Anti-Pattern | Problem | Better |
|---|---|---|
| "Should be fast" | Unmeasurable | "Completes in <2s for 10K rows" |
| "Handle errors gracefully" | Vague | "Invalid input returns exit code 1 with descriptive message" |
| "User-friendly output" | Subjective | "Output uses Rich table formatting with headers" |
</acceptance_criteria_patterns>
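One way to keep criteria honest is to translate each checklist item directly into a test. A minimal sketch, assuming a hypothetical `parse` console script like the one in the examples above:

```python
# Sketch: the "empty file" criterion from Format 2 as a pytest test.
# `parse` is a hypothetical console script, not a real entry point.
import subprocess

def test_empty_file_returns_exit_code_1(tmp_path) -> None:
    """Empty file input returns exit code 1 with message 'Empty file'."""
    empty = tmp_path / "empty.csv"
    empty.write_text("")
    proc = subprocess.run(["parse", str(empty)], capture_output=True, text=True)
    assert proc.returncode == 1
    assert "Empty file" in proc.stderr
```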
Define the public API before implementation:
# Define function signatures and docstrings
def new_feature(
    input_data: InputType,
    *,
    option: str = "default",
) -> ResultType:
    """Process input data with new feature capability.

    Args:
        input_data: The data to process
        option: Configuration option

    Returns:
        Processed result

    Raises:
        ValidationError: If input_data is invalid
    """
    ...
import pytest
from pytest_mock import MockerFixture  # available for tests that need mocking

class TestNewFeature:
    """Tests for new_feature functionality."""

    def test_basic_operation(self) -> None:
        """Test basic feature operation with valid input."""
        # Arrange
        input_data = create_valid_input()  # placeholder helper
        # Act
        result = new_feature(input_data)
        # Assert
        assert result.status == "success"

    def test_handles_invalid_input(self) -> None:
        """Test feature raises error for invalid input."""
        with pytest.raises(ValidationError, match="Invalid input"):
            new_feature(invalid_input)

    def test_option_affects_behavior(self) -> None:
        """Test that option parameter changes processing."""
        result_default = new_feature(data, option="default")
        result_custom = new_feature(data, option="custom")
        assert result_default != result_custom
- [ ] All functions have complete type hints
- [ ] Docstrings follow Google style
- [ ] Error handling uses specific exceptions
- [ ] No hardcoded values (use constants or config)
- [ ] Follows existing project patterns
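A minimal sketch of an implementation that satisfies this checklist; the types and behavior are illustrative stand-ins, not the real feature:

```python
# Sketch: skeleton satisfying the checklist above (full type hints,
# Google-style docstring, specific exception, no hardcoded values).
from dataclasses import dataclass

DEFAULT_OPTION = "default"  # module-level constant instead of an inline literal

class ValidationError(ValueError):
    """Raised when input data fails validation."""

@dataclass
class Result:
    status: str
    payload: list[str]

def new_feature(rows: list[str], *, option: str = DEFAULT_OPTION) -> Result:
    """Process rows with the new feature capability.

    Args:
        rows: Raw input rows to process.
        option: Configuration option controlling processing.

    Returns:
        A Result with status and processed payload.

    Raises:
        ValidationError: If rows is empty.
    """
    if not rows:
        raise ValidationError("Invalid input: no rows provided")
    payload = [row.strip() for row in rows]
    if option != DEFAULT_OPTION:
        payload = [row.upper() for row in payload]
    return Result(status="success", payload=payload)
```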
# src/module/__init__.py
from .new_feature import new_feature, ResultType

__all__ = [
    "new_feature",
    "ResultType",
    # ... existing exports
]
# Assumes an existing typer.Typer() instance `app` and rich Console `console`
from pathlib import Path
from typing import Annotated

import typer

@app.command()
def feature_command(
    input_file: Annotated[Path, typer.Argument(help="Input file")],
) -> None:
    """Run new feature on input file."""
    result = new_feature(load_data(input_file))
    console.print(f"Result: {result}")
# Linting
uv run ruff check src/ tests/
uv run ruff format --check src/ tests/
# Type checking — match hooks/CI (ty vs mypy); do not use mypy only because [tool.mypy] exists
uv run ty check src/ tests/
# If hooks/CI run mypy:
# uv run mypy src/ tests/
# Tests with coverage
uv run pytest tests/ --cov=src --cov-report=term-missing
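If a single entry point is preferred over four shell commands, a stdlib-only sketch can chain the same checks and stop at the first failure (swap the ty line for mypy where that is what the repo runs):

```python
# Sketch: run the verification commands above in order, failing fast.
import subprocess
import sys

COMMANDS = [
    ["uv", "run", "ruff", "check", "src/", "tests/"],
    ["uv", "run", "ruff", "format", "--check", "src/", "tests/"],
    ["uv", "run", "ty", "check", "src/", "tests/"],  # or mypy, per repo
    ["uv", "run", "pytest", "tests/", "--cov=src", "--cov-report=term-missing"],
]

for cmd in COMMANDS:
    print(f"$ {' '.join(cmd)}")
    if subprocess.run(cmd).returncode != 0:
        sys.exit(1)
```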
<success_metrics_framework>
Define measurable success criteria before implementation:
Leading Indicators (Observable in days-weeks):
| Metric | Target | How to Measure |
|---|---|---|
| Test pass rate | 100% | pytest --tb=short |
| Type coverage | 100% | ty check or project mypy command |
| Code coverage | ≥80% new code | pytest --cov |
| Command startup | <500ms | time uv run <cmd> --help |
Lagging Indicators (Observable in weeks-months):
| Metric | Target | How to Measure |
|---|---|---|
| User adoption | N users/week | Usage logs or feedback |
| Error rate | <1% of invocations | Error logs |
| Support tickets | Reduction from baseline | Issue tracker |
For CLI Features:
# Performance baseline
hyperfine 'uv run <command> <args>' --warmup 3
# Memory usage
/usr/bin/time -v uv run <command> <args> 2>&1 | grep "Maximum resident"
Evaluation Window: Specify when metrics will be reviewed (e.g., "1 week post-merge", "after 100 invocations").
</success_metrics_framework>
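Where hyperfine is not installed, a rough stdlib-only sketch can check the startup target; `your-command` is a placeholder for the project's actual entry point:

```python
# Sketch: single-shot startup measurement against the <500ms target.
# For real baselines prefer hyperfine, which warms up and averages runs.
import subprocess
import time

start = time.perf_counter()
subprocess.run(["uv", "run", "your-command", "--help"], capture_output=True)
elapsed_ms = (time.perf_counter() - start) * 1000
print(f"startup: {elapsed_ms:.0f} ms ({'OK' if elapsed_ms < 500 else 'SLOW'})")
```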
Update relevant documentation.
Example:
Request: "Add CSV export functionality to the report module"
Project structure:
- src/reports/generator.py - existing report generation
- src/reports/formats/ - existing format handlers
- tests/reports/ - existing report tests
Pattern: Format handlers inherit from BaseFormatter
## Feature: CSV Export
**Purpose**: Export reports in CSV format
**Files**:
- `src/reports/formats/csv_formatter.py` - CSV implementation
- `tests/reports/formats/test_csv_formatter.py` - Tests
**Interface**:
class CsvFormatter(BaseFormatter):
    def format(self, report: Report) -> str: ...
**CLI**: `--format csv`

Consult ../python3-core/references/python3-standards.md when verifying the feature against shared plugin standards. Ensure:
- Type checking matches what the repo actually runs (ty vs mypy): use `uv run ty check` when ty is what the repo runs; use `uv run mypy` only when mypy is actually invoked there (not merely because [tool.mypy] exists).
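To ground the worked example, here is a minimal sketch of the CSV handler; BaseFormatter and Report are stand-ins for the project's real classes, so only the csv/io usage is concrete:

```python
# Sketch: CSV format handler from the worked example.
import csv
import io

class BaseFormatter:  # stand-in for the project's existing base class
    def format(self, report: "Report") -> str:
        raise NotImplementedError

class Report:  # stand-in: header names plus row tuples
    def __init__(self, headers: list[str], rows: list[tuple[str, ...]]) -> None:
        self.headers = headers
        self.rows = rows

class CsvFormatter(BaseFormatter):
    def format(self, report: Report) -> str:
        buffer = io.StringIO()
        writer = csv.writer(buffer)
        writer.writerow(report.headers)
        writer.writerows(report.rows)
        return buffer.getvalue()
```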