ALWAYS invoke this skill when writing or fixing implementation code for Python.
<objective>
This skill WRITES implementation. Tests should already exist.
</objective>
<mode_detection>
Determine which mode you're in:

- **WRITE mode**: implementation doesn't exist or tests are failing
- **FIX mode**: implementation exists but was rejected by the reviewer (/auditing-python output shows REJECT with specific issues)

Always check which mode applies before proceeding.
</mode_detection>
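The mode decision above reduces to two observable signals: whether the tests pass and whether the reviewer rejected the implementation. A minimal sketch (the booleans stand in for the real pytest result and /auditing-python verdict, which are assumptions here):

```python
def detect_mode(tests_pass: bool, reviewer_rejected: bool) -> str:
    """Decide which workflow applies from the two observable signals."""
    if reviewer_rejected:
        # A REJECT verdict takes priority: targeted fixes, not a rewrite.
        return "FIX"
    if not tests_pass:
        # Failing (or missing) implementation means writing from the tests.
        return "WRITE"
    return "DONE"

assert detect_mode(tests_pass=False, reviewer_rejected=False) == "WRITE"
assert detect_mode(tests_pass=True, reviewer_rejected=True) == "FIX"
assert detect_mode(tests_pass=True, reviewer_rejected=False) == "DONE"
```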
<prerequisites>
Before invoking this skill, the following must already have run:

- /testing-python
- /auditing-python-tests
- /spec-tree:contextualizing

If tests don't exist or aren't approved, go back to those earlier steps.
</prerequisites>
<write_mode_workflow>
Read the existing tests first:

```bash
# Read test files
cat {node_path}/tests/*.py

# Run tests to see failures
uv run --extra dev pytest {node_path}/tests/ -v
```

Understand what behavior the tests expect before writing any code.

Write minimal code that makes the tests pass.
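"Minimal code that makes tests pass" can look like this sketch, where a pre-existing test pins the behavior and the implementation does nothing beyond satisfying it (all names here are hypothetical, not from this skill):

```python
# Named constant per the code standards; the value is illustrative.
MIN_ORDER_VALUE = 10

def is_valid_order_total(total: int) -> bool:
    """Minimal implementation: exactly what the tests demand, no more."""
    return total >= MIN_ORDER_VALUE

# The kind of pre-existing tests (from /testing-python) this satisfies:
def test_order_below_minimum_is_invalid() -> None:
    assert not is_valid_order_total(MIN_ORDER_VALUE - 1)

def test_order_at_minimum_is_valid() -> None:
    assert is_valid_order_total(MIN_ORDER_VALUE)

test_order_below_minimum_is_invalid()
test_order_at_minimum_is_valid()
```

The point of the sketch is the direction of dependency: the test exists first, and the implementation is shaped by it.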
Code standards (per /standardizing-python):

```python
# ✅ Type annotations on ALL functions
def process_order(order: Order, config: Config) -> OrderResult: ...

# ✅ Named constants for all literals
MIN_ORDER_VALUE = 10
MAX_ITEMS = 100

# ✅ Dependency injection for external dependencies
@dataclass
class Deps:
    run_command: CommandRunner
```

Run the tests again:

```bash
uv run --extra dev pytest {node_path}/tests/ -v
```

All tests should pass. If any fail, fix the implementation and re-run.
Clean up while keeping tests green:

```bash
# Type checking
uv run --extra dev mypy src/

# Linting
uv run --extra dev ruff check src/

# Tests one more time
uv run --extra dev pytest {node_path}/tests/ -v
```

All must pass before declaring the implementation complete.
</write_mode_workflow>
<fix_mode_workflow>
Find the most recent /auditing-python output and its specific rejection reasons. For each rejection reason, apply the matching fix:
| Rejection Category | Fix Action |
|---|---|
| Magic values | Extract to named constants |
| Missing type annotations | Add types to all functions |
| Direct external imports | Refactor to dependency injection |
| Deep relative imports | Change to absolute imports |
| Missing `-> None` | Add return type |
| Security issues | Fix the vulnerability (don't suppress) |
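As one concrete instance of the table above, a "Magic values" fix is a mechanical before/after (function names and the retry limit here are illustrative, chosen to match the MAX_RETRIES example later in this skill):

```python
# Before: rejected, because the retry limit is a magic value.
def can_retry_before(attempts: int) -> bool:
    return attempts < 3

# After: the literal is extracted to a named constant.
MAX_RETRIES = 3

def can_retry_after(attempts: int) -> bool:
    return attempts < MAX_RETRIES

# Behavior is unchanged by the fix; only readability improves.
assert can_retry_before(2) == can_retry_after(2)
assert can_retry_after(MAX_RETRIES) is False
```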
After applying fixes, verify everything still passes:

```bash
# Run tests
uv run --extra dev pytest {node_path}/tests/ -v

# Type checking
uv run --extra dev mypy src/

# Linting
uv run --extra dev ruff check src/
```
Then report:

## Implementation Fixed
### Issues Addressed
| Issue | Location | Fix Applied |
| ----------- | --------------- | --------------------------------- |
| Magic value | handler.py:45 | Extracted to MAX_RETRIES constant |
| Missing DI | processor.py:12 | Added ProcessorDeps dataclass |
### Verification
All tests pass. Types and lint clean. Ready for re-review.
</fix_mode_workflow>
<code_patterns>
Named constants instead of magic values:

```python
# ❌ REJECTED
def validate_score(score: int) -> bool:
    return 0 <= score <= 100

# ✅ REQUIRED
MIN_SCORE = 0
MAX_SCORE = 100

def validate_score(score: int) -> bool:
    return MIN_SCORE <= score <= MAX_SCORE
```

Dependency injection instead of direct external imports:

```python
# ❌ REJECTED
import subprocess

def sync_files(src: str, dest: str) -> bool:
    result = subprocess.run(["rsync", src, dest])
    return result.returncode == 0

# ✅ REQUIRED
@dataclass
class SyncDeps:
    run_command: CommandRunner

def sync_files(src: str, dest: str, deps: SyncDeps) -> bool:
    returncode, _, _ = deps.run_command.run(["rsync", src, dest])
    return returncode == 0
```

Full type annotations everywhere:

```python
# ✅ All functions have full type annotations
def get_user(user_id: int) -> User | None:
    users: list[User] = fetch_users()
    return next((u for u in users if u.id == user_id), None)
```
</code_patterns>
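The CommandRunner type referenced in these patterns is not defined in this skill. One way such a seam could be wired up, sketched as a `typing.Protocol` whose `run` signature (returning `(returncode, stdout, stderr)`) is an assumption, along with the fake that makes the tests fast and hermetic:

```python
from dataclasses import dataclass
from typing import Protocol

class CommandRunner(Protocol):
    """Assumed interface: run a command, return (returncode, stdout, stderr)."""
    def run(self, args: list[str]) -> tuple[int, str, str]: ...

@dataclass
class FakeRunner:
    """Test double: records nothing, runs nothing, returns a canned result."""
    returncode: int = 0

    def run(self, args: list[str]) -> tuple[int, str, str]:
        return (self.returncode, "", "")

@dataclass
class SyncDeps:
    run_command: CommandRunner

def sync_files(src: str, dest: str, deps: SyncDeps) -> bool:
    returncode, _, _ = deps.run_command.run(["rsync", src, dest])
    return returncode == 0

# Tests inject the fake, so no subprocess ever runs:
assert sync_files("a", "b", SyncDeps(run_command=FakeRunner())) is True
assert sync_files("a", "b", SyncDeps(run_command=FakeRunner(returncode=1))) is False
```

In production code, a small adapter wrapping `subprocess.run` would satisfy the same protocol, keeping the external dependency at the edge.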
<output_format>
WRITE mode output:
## Implementation Complete
### Node: {node_path}
### Files Created/Modified
| File | Action | Description |
| ---------------- | ------- | ------------- |
| `src/handler.py` | Created | Order handler |
### Verification
- Tests: ✓ Pass
- Types: ✓ Pass
- Lint: ✓ Pass
Ready for review.
FIX mode output:
## Implementation Fixed
### Issues Addressed
| Issue | Location | Fix Applied |
| ------- | ----------- | ----------- |
| {issue} | {file:line} | {fix} |
### Verification
All checks pass. Ready for re-review.
</output_format>
<success_criteria>
Task is complete when:

- All tests in {node_path}/tests/ pass
- Type checking is clean (mypy)
- Linting is clean (ruff check)
</success_criteria>