# Comprehensive Python/FastAPI backend code review

Comprehensive Python/FastAPI backend code review with optional parallel agents. Detects SQLAlchemy, Pydantic-AI, and Postgres technologies, loads specialized skills, and runs linters before flagging issues to avoid false positives. Use after changes to catch bugs, security issues, and architectural problems before merging.

## Installation

```
/plugin marketplace add anderskev/beagle
/plugin install anderskev-beagle@anderskev/beagle
```

## Flags

- `--parallel`: spawn specialized subagents per technology area.

## Step 1: Identify changed files

```bash
git diff --name-only $(git merge-base HEAD main)..HEAD | grep -E '\.py$'
```
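The changed-file filter can be exercised on a throwaway repository as a sanity check (the branch and file names below are demo-only; `git` is assumed to be on PATH):

```shell
# Throwaway repo to exercise the changed-file filter (all names are demo-only)
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email demo@example.com
git config user.name demo
git checkout -qb main
git commit -q --allow-empty -m "init"
git checkout -qb feature
touch api.py helpers.py README.md
git add .
git commit -qm "add files"
# Same pipeline as above: only .py files changed since branching from main
git diff --name-only "$(git merge-base HEAD main)"..HEAD | grep -E '\.py$'
```

Note that `git merge-base HEAD main` picks the branch point, so files already on `main` never show up as "changed" even after `main` moves on.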
## Step 2: Run project linters

**CRITICAL:** Run project linters BEFORE flagging any style or type issues.

```bash
# Check if ruff config exists and run it
if [ -f "pyproject.toml" ] || [ -f "ruff.toml" ]; then
  ruff check <changed_files>
fi

# Check if mypy config exists and run it
if [ -f "pyproject.toml" ] || [ -f "mypy.ini" ]; then
  mypy <changed_files>
fi
```
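The gate can be illustrated with a stub standing in for the real linter (the `run_linter` function and `LINT_STATUS` variable below are illustrative, not part of the plugin):

```shell
# Stub demo of the lint gate: run_linter stands in for `ruff check <changed_files>`
workdir=$(mktemp -d)
touch "$workdir/pyproject.toml"   # simulate a project with a ruff config

run_linter() {
  # Stand-in for the real linter; exit 0 means it reported no findings
  return 0
}

if [ -f "$workdir/pyproject.toml" ] || [ -f "$workdir/ruff.toml" ]; then
  if run_linter; then
    LINT_STATUS="clean"    # linter passed: do not flag style issues yourself
  else
    LINT_STATUS="failed"   # report only what the linter itself reported
  fi
else
  LINT_STATUS="skipped"    # no config found: fall back to reviewer judgment
fi
echo "$LINT_STATUS"
```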
Rules:

- Never flag a style or line-length issue that the configured linter does not report.
- Treat the linter configuration as authoritative for style decisions.

Why: Analysis of 24 review outcomes showed 4 false positives (17%) where reviewers flagged line-length violations that `ruff check` confirmed don't exist. The linter's configuration reflects intentional project decisions.
## Step 3: Detect technologies

```bash
# Detect Pydantic-AI
grep -r "pydantic_ai\|@agent\.tool\|RunContext" --include="*.py" -l | head -3

# Detect SQLAlchemy
grep -r "from sqlalchemy\|Session\|relationship" --include="*.py" -l | head -3

# Detect Postgres-specific
grep -r "psycopg\|asyncpg\|JSONB\|GIN" --include="*.py" -l | head -3

# Check for test files
git diff --name-only $(git merge-base HEAD main)..HEAD | grep -E 'test.*\.py$'
```
## Step 4: Load skills

Use the Skill tool to load each applicable skill (e.g., `Skill(skill: "beagle:python-code-review")`).

Always load:

- `beagle:python-code-review`
- `beagle:fastapi-code-review`

Conditionally load based on detection:
| Condition | Skill |
|---|---|
| Test files changed | `beagle:pytest-code-review` |
| Pydantic-AI detected | `beagle:pydantic-ai-common-pitfalls` |
| SQLAlchemy detected | `beagle:sqlalchemy-code-review` |
| Postgres detected | `beagle:postgres-code-review` |
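The detection-to-skill mapping can be sketched as shell logic over a sample tree (the file contents and the `SKILLS` variable are hypothetical; the grep patterns mirror the detection commands above):

```shell
# Sample tree with one SQLAlchemy hit and no Pydantic-AI or Postgres hits
src=$(mktemp -d)
cat > "$src/models.py" <<'EOF'
from sqlalchemy.orm import Session
EOF
cat > "$src/main.py" <<'EOF'
from fastapi import FastAPI
EOF

SKILLS="beagle:python-code-review beagle:fastapi-code-review"   # always loaded
if grep -rq "pydantic_ai\|@agent\.tool\|RunContext" --include="*.py" "$src"; then
  SKILLS="$SKILLS beagle:pydantic-ai-common-pitfalls"
fi
if grep -rq "from sqlalchemy\|Session\|relationship" --include="*.py" "$src"; then
  SKILLS="$SKILLS beagle:sqlalchemy-code-review"
fi
if grep -rq "psycopg\|asyncpg\|JSONB\|GIN" --include="*.py" "$src"; then
  SKILLS="$SKILLS beagle:postgres-code-review"
fi
echo "$SKILLS"
```

On this sample tree only the SQLAlchemy branch fires, so `SKILLS` ends up holding the two always-loaded skills plus `beagle:sqlalchemy-code-review`.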
## Step 5: Run the review

Sequential (default): apply all loaded skills in a single review pass over the changed files.

Parallel (`--parallel` flag): spawn one subagent per detected technology area via the Task tool and merge their findings.

Verify every finding against the actual code before reporting it. Why: Analysis showed rejections where reviewers flagged "inconsistent error handling" that was intentional optimization, and "missing test coverage" for code paths that don't exist.
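The two modes can be contrasted with a toy fan-out, using background shell jobs as stand-ins for Task-tool subagents (the `review_area` function and log file names are illustrative):

```shell
cd "$(mktemp -d)"

review_area() {
  # Stand-in for one specialized subagent reviewing a technology area
  echo "reviewed: $1"
}

areas="python fastapi sqlalchemy"

# Sequential (default): one pass, one area at a time
for area in $areas; do review_area "$area"; done > sequential.log

# Parallel (--parallel): one background job per area, then wait for all
for area in $areas; do review_area "$area" & done > parallel.log
wait
```

Both logs end up with the same three findings lines; only the scheduling differs, which is why the parallel mode's merge step matters in the real plugin.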
## Output format

```markdown
## Review Summary
[1-2 sentence overview of findings]

## Issues

### Critical (Blocking)
1. [FILE:LINE] ISSUE_TITLE
   - Issue: Description of what's wrong
   - Why: Why this matters (bug, type safety, security)
   - Fix: Specific recommended fix

### Major (Should Fix)
2. [FILE:LINE] ISSUE_TITLE
   - Issue: ...
   - Why: ...
   - Fix: ...

### Minor (Nice to Have)
N. [FILE:LINE] ISSUE_TITLE
   - Issue: ...
   - Why: ...
   - Fix: ...

## Good Patterns
- [FILE:LINE] Pattern description (preserve this)

## Verdict
Ready: Yes | No | With fixes 1-N
Rationale: [1-2 sentences]
```
## Final verification

After fixes are applied, run:

```bash
ruff check .
mypy .
pytest
```

All checks must pass before approval.