Trigger for ANY task involving Python code, Python tooling, or Python-ecosystem libraries — even when 'Python' isn't explicitly mentioned. If the query references any of these, use this skill: pyproject.toml, hatchling, setuptools, ruff, pyright, mypy, pytest, FastMCP, MCP server, asyncio, TaskGroup, Typer, Click, argparse, dataclass, Pydantic, pathlib, httpx, Polars, Pandas, NumPy, FastAPI, Django, Flask, SQLAlchemy, Alembic, uv, pip, venv, cProfile, py-spy, PDF/Excel/DOCX automation, .py files. Covers writing, debugging, configuring, and reviewing Python. Includes build config, linting config, type checking config, packaging, async concurrency, CLI apps, data scripts, web frameworks, ORM, profiling, and MCP server development. Do NOT use for: ML training/fine-tuning/RAG/vector DBs (use ml-data-engineering), REST/GraphQL API design (use backend-data), test strategy/methodology (use testing-quality), React/TypeScript/frontend (use web-frontend).
From george-setup. Install: `npx claudepluginhub george11642/george-plugins --plugin george-setup`. This skill uses the workspace's default tool permissions.
Master skill for Python: testing, packaging, async, performance, types, linting, debugging, file processing, and MCP server development. Philosophy: type-safe, tested, profiled-before-optimized code that uses modern tooling (uv, ruff, pyright) over legacy (pip, flake8, pylint).
| Task | Reference |
|---|---|
| Tests, fixtures, mocking, coverage | references/testing.md |
| Packaging, pyproject.toml, publishing | references/packaging.md |
| Async, concurrency, event loops | references/async.md |
| uv, dependency management, lockfiles | references/uv.md |
| Profiling, memory, CPU optimization | references/performance.md |
| Types, Pydantic, mypy/pyright config | references/types.md |
| Linting (ruff), formatting, CI checks | references/linting.md |
| PDF, Excel, Word / DOCX file automation | references/file-processing.md |
| MCP servers, FastMCP, tool registration | references/mcp-servers.md |
| FastAPI, Django, Flask web frameworks | references/frameworks.md |
| SQLAlchemy ORM, async DB, Alembic migrations | references/database-orm.md |
| asyncio TaskGroup, threading, multiprocessing | references/concurrency-decisions.md |
| Pandas, Polars, NumPy, data validation | references/data-processing.md |
| Click, Typer, argparse CLI tools | references/cli-development.md |
```python
#!/usr/bin/env python3
"""One-line description of what this script does."""
from __future__ import annotations

import argparse
import logging
import sys
from pathlib import Path

log = logging.getLogger(__name__)


def main(argv: list[str] | None = None) -> int:
    parser = argparse.ArgumentParser(description=__doc__)
    parser.add_argument("input", type=Path, help="Input file path")
    parser.add_argument("-v", "--verbose", action="store_true")
    args = parser.parse_args(argv)
    logging.basicConfig(level=logging.DEBUG if args.verbose else logging.INFO)
    # ... logic here ...
    return 0


if __name__ == "__main__":
    sys.exit(main())
```
Why `from __future__ import annotations`: defers annotation evaluation, enables `X | Y` union syntax in annotations on pre-3.10 interpreters, and avoids forward-reference strings.
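A minimal sketch of what the import buys you (the `Node` and `first_or_none` names are illustrative, not from this skill):

```python
from __future__ import annotations


class Node:
    # The class can reference its own name without quoting it,
    # because annotations are no longer evaluated at definition time.
    def child(self) -> Node:
        return Node()


def first_or_none(items: list[str]) -> str | None:
    # "str | None" in the annotation parses on 3.7+ with the future import.
    return items[0] if items else None


print(first_or_none(["a"]))  # -> a
print(first_or_none([]))     # -> None
```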
| Need | Pattern |
|---|---|
| Union type | `str \| None` (with `__future__` annotations) |
| Dataclass | `@dataclass(frozen=True, slots=True)` -- immutable + memory efficient |
| Enum | `class Color(StrEnum):` (3.11+) or `class Color(str, Enum):` |
| Context manager | `@contextmanager` + `yield` from `contextlib` |
| Temp files | `with tempfile.TemporaryDirectory() as d:` |
| Path ops | `Path("a") / "b" / "c.txt"` -- never `os.path.join` |
| Dict merge | `{**a, **b}` or `a \| b` (3.9+) |
| Env vars | `os.environ["KEY"]` (crash if missing) or `.get("KEY", default)` |
| CLI tool | `click` for complex CLIs, `argparse` for stdlib-only |
| HTTP client | `httpx` (async-native) over `requests` |
```python
import pytest
from unittest.mock import patch

# Fixture with teardown: code after yield runs when the test finishes
@pytest.fixture
def db():
    conn = connect()  # connect() stands in for your real factory
    yield conn
    conn.close()

# Parametrize for edge cases
@pytest.mark.parametrize("input,expected", [("a", 1), ("", 0)])
def test_length(input, expected):
    assert len(input) == expected

# Mock at the import boundary (patch where the name is looked up)
with patch("myapp.services.requests.get") as mock:
    mock.return_value.json.return_value = {"id": 1}
```
Prefer static type checking -- it catches bugs before tests run.
Configure the checker in `pyproject.toml`; avoid `Any` unless wrapping truly dynamic APIs.

```python
# Pydantic v2 model
from pydantic import BaseModel, Field, field_validator


class UserCreate(BaseModel):
    name: str = Field(min_length=1, max_length=100)
    email: str
    age: int = Field(ge=0, le=150)

    @field_validator("email")
    @classmethod
    def must_be_valid_email(cls, v: str) -> str:
        if "@" not in v:
            raise ValueError("invalid email")
        return v.lower()
```
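A quick usage sketch for that model, assuming Pydantic v2 is installed; the model is redefined here so the snippet runs standalone:

```python
from pydantic import BaseModel, Field, ValidationError, field_validator


class UserCreate(BaseModel):
    name: str = Field(min_length=1, max_length=100)
    email: str
    age: int = Field(ge=0, le=150)

    @field_validator("email")
    @classmethod
    def must_be_valid_email(cls, v: str) -> str:
        if "@" not in v:
            raise ValueError("invalid email")
        return v.lower()


# Valid input: the validator normalizes the email to lowercase.
user = UserCreate(name="Ada", email="ADA@example.com", age=36)
print(user.email)  # -> ada@example.com

# Invalid input raises ValidationError instead of producing a bad object.
try:
    UserCreate(name="Ada", email="no-at-sign", age=36)
except ValidationError as e:
    print(e.error_count(), "validation error")
```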
See references/types.md for pyright/mypy config, generics, protocols, and TypeVar patterns.
Use ruff -- it replaces flake8, isort, pyflakes, pycodestyle, and most pylint checks in one fast tool.
```toml
# pyproject.toml
[tool.ruff]
target-version = "py311"
line-length = 88

[tool.ruff.lint]
select = ["E", "F", "W", "I", "UP", "B", "SIM", "RUF"]
# E=pycodestyle, F=pyflakes, I=isort, UP=pyupgrade, B=bugbear, SIM=simplify

[tool.ruff.format]
quote-style = "double"
```
Why ruff over flake8+isort+black: 10-100x faster, single config, auto-fixes most issues. Run `ruff check --fix . && ruff format .`
See references/linting.md for rule selection, per-file ignores, and CI setup.
- `breakpoint()` (3.7+) drops into pdb. Set `PYTHONBREAKPOINT=0` to disable in prod.
- `python -m pdb script.py` for post-mortem debugging on crash.
- pdb commands: `n`(ext), `s`(tep into), `c`(ontinue), `p expr`, `pp obj`, `l`(ist), `w`(here), `u`(p), `d`(own).
- `debugpy` for remote/VS Code debugging: `python -m debugpy --listen 5678 --wait-for-client script.py`
- `icecream` for quick print-debugging: `from icecream import ic; ic(variable)` -- shows expression + value.
- `traceback.print_exc()` in `except` blocks for full stack traces without re-raising.

Why `breakpoint()` over `import pdb; pdb.set_trace()`: one call, configurable via env var, works with any debugger.
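A runnable sketch of the `PYTHONBREAKPOINT` and `traceback.print_exc()` points above (`parse_port` is a made-up example function). The default `sys.breakpointhook` re-reads the environment variable on every call, so setting it at runtime works too:

```python
import os
import traceback

# Disable breakpoint() for this process: the default hook becomes a no-op.
os.environ["PYTHONBREAKPOINT"] = "0"


def parse_port(raw: str) -> int:
    breakpoint()  # skipped while PYTHONBREAKPOINT=0
    return int(raw)


print(parse_port("8080"))  # -> 8080

try:
    parse_port("not-a-number")
except ValueError:
    # Full stack trace goes to stderr; execution continues.
    traceback.print_exc()
```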
A `src/my_package/` layout prevents accidental imports of uninstalled code.

These atomic skills provide deeper specialization within the Python domain:
| Skill | Use when |
|---|---|
| testing | pytest fixtures, mocking, coverage configuration |
| packaging | pyproject.toml, publishing to PyPI, build systems |
| async | asyncio patterns, event loops, async generators |
| performance | cProfile, py-spy, memory profiling, optimization |
| types | mypy/pyright config, generics, protocols, TypeVar |
| linting | ruff rules, formatting, CI integration |
| file-processing | PDF, Excel, Word automation |
| mcp-servers | FastMCP, tool registration, MCP protocol |
| frameworks | FastAPI, Django, Flask patterns |
| database-orm | SQLAlchemy, async DB, Alembic migrations |
| concurrency | threading, multiprocessing, TaskGroup |
| data-processing | pandas, polars, NumPy, data validation |
| cli-development | Click, Typer, argparse CLI tools |