Review implementation plans for parallelization, TDD, types, libraries, and security before execution
Reviews implementation plans for parallelization, TDD, types, libraries, and security before execution. Use this to catch issues in plans created by `superpowers:writing-plans` before you start coding.
Install via:

- `/plugin marketplace add anderskev/beagle`
- `/plugin install anderskev-beagle@anderskev/beagle`

Review implementation plans created by `superpowers:writing-plans` before execution. Provide the path to the plan file (e.g., `docs/plans/2025-01-15-auth-feature.md`).

Read the plan file and extract:
Header fields:

- **Goal:** Feature description
- **Architecture:** Approach summary
- **Tech Stack:** Technologies used

Verify via file patterns:
- `.py` files → Python
- `.ts`, `.tsx` files → TypeScript
- `.go` files → Go
- `pytest` commands → pytest
- `vitest`, `jest` commands → JavaScript/TypeScript testing
- `go test` commands → Go testing

Use the Skill tool to load each applicable skill (e.g., `Skill(skill: "beagle:python-code-review")`); a detection sketch follows the table below.
Based on detected tech stack, load relevant skills:
| Detected | Skill |
|---|---|
| Python | beagle:python-code-review |
| FastAPI | beagle:fastapi-code-review |
| SQLAlchemy | beagle:sqlalchemy-code-review |
| PostgreSQL | beagle:postgres-code-review |
| pytest | beagle:pytest-code-review |
| React Router | beagle:react-router-code-review |
| React Flow | beagle:react-flow-code-review |
| shadcn/ui | beagle:shadcn-code-review |
| vitest | beagle:vitest-testing |
| Go | beagle:go-code-review |
| BubbleTea | beagle:bubbletea-code-review |
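For illustration, the detection step might look like the minimal Python sketch below. The mappings are abridged from the list and table above, and the function name is hypothetical:

```python
from pathlib import Path

# File patterns -> detected stack (abridged from the list above).
FILE_PATTERNS = {"*.py": "Python", "*.go": "Go", "*.ts": "TypeScript"}

# Detected stack -> skill to load (abridged from the table above).
SKILLS = {"Python": "beagle:python-code-review", "Go": "beagle:go-code-review"}

def detect_skills(repo_root: str) -> set[str]:
    root = Path(repo_root)
    stacks = {s for glob, s in FILE_PATTERNS.items() if any(root.rglob(glob))}
    return {SKILLS[s] for s in stacks if s in SKILLS}
```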
Use the Task tool to spawn 5 agents simultaneously, one per review dimension. Each receives its own prompt:
**Agent 1: Parallelization.** Analyze whether this implementation plan can be executed by parallel subagents.
INVESTIGATE:
1. Which tasks can run in parallel (no dependencies between them)?
2. Which tasks must be sequential (Task B depends on Task A output)?
3. Are there any circular dependencies or blocking issues?
4. What is the critical path?
Return:
- Recommended batch structure for parallel execution
- Maximum concurrent agents
- Any blocking issues that prevent parallelization
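For intuition, the batch structure this agent returns corresponds to topological levels of the task dependency graph. A minimal sketch with Python's standard `graphlib` (task names are hypothetical):

```python
from graphlib import TopologicalSorter

# Hypothetical dependencies extracted from a plan:
# each task maps to the set of tasks it depends on.
deps = {
    "models": set(),
    "api": {"models"},
    "ui": {"models"},
    "e2e-tests": {"api", "ui"},
}

ts = TopologicalSorter(deps)
ts.prepare()
batches = []
while ts.is_active():
    ready = list(ts.get_ready())  # all tasks whose dependencies are satisfied
    batches.append(ready)         # one batch = one wave of parallel agents
    ts.done(*ready)

# batches -> [['models'], ['api', 'ui'], ['e2e-tests']]
# max concurrent agents = max(len(b) for b in batches) -> 2
```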
**Agent 2: TDD.** Verify TDD discipline in this implementation plan.
CHECK each task for:
1. Tests written BEFORE implementation (RED phase)
2. Step to run test and verify it fails
3. Minimal implementation to make test pass (GREEN phase)
4. Tests focus on behavior, not implementation details
LOOK FOR over-engineering:
- Excessive mocking (testing implementation vs behavior)
- Too many abstraction layers
- Defensive code for impossible scenarios
- Premature optimization
Return: TDD adherence assessment and over-engineering concerns.
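As a concrete example of the RED/GREEN shape each task should have, here is a minimal pytest sketch for a hypothetical `slugify` feature:

```python
# RED: the plan's first step writes a failing test for behavior,
# not implementation details.
def test_slugify_lowercases_and_hyphenates():
    assert slugify("Hello World") == "hello-world"

# Run the test and verify it fails (slugify doesn't exist yet). THEN:

# GREEN: the minimal implementation that makes the test pass.
def slugify(text: str) -> str:
    return text.strip().lower().replace(" ", "-")
```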
**Agent 3: Types/APIs.** Verify that the types and APIs in the plan match the actual codebase.
SEARCH the codebase for:
1. All types referenced in the plan's code blocks
2. Existing type definitions
3. API endpoint contracts (request/response shapes)
4. Import paths
VERIFY:
1. All properties referenced exist in the types
2. Enum values match between plan and codebase
3. Import paths are correct
4. No type mismatches
Return: List of mismatches with file:line references.
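One lightweight way to verify property references, sketched against a hypothetical `User` dataclass (a real run would search the actual codebase instead):

```python
from dataclasses import dataclass, fields

@dataclass
class User:  # the type as defined in the codebase
    id: int
    email: str

plan_refs = {"id", "email", "username"}   # properties the plan's code blocks touch
actual = {f.name for f in fields(User)}
missing = plan_refs - actual               # {'username'} -> report with file:line
```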
**Agent 4: Libraries.** Verify that library usage in this plan follows best practices.
For each library referenced:
1. Are function signatures correct for current versions?
2. Are there deprecated APIs being used?
3. Does usage follow library documentation?
4. Are installation commands correct?
Check against loaded skills for technology-specific guidance.
Return: Incorrect API usage with recommendations.
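Version checks should run against what is actually installed, not what the plan assumes. A minimal sketch using the standard library (package names and version floors are hypothetical):

```python
from importlib.metadata import PackageNotFoundError, version

def vkey(v: str) -> tuple[int, ...]:
    # Naive major.minor key; pre-release suffixes would need packaging.version.
    return tuple(int(p) for p in v.split(".")[:2])

# Hypothetical version floors the plan's code assumes.
assumed = {"sqlalchemy": "2.0", "fastapi": "0.100"}

for pkg, floor in assumed.items():
    try:
        installed = version(pkg)
    except PackageNotFoundError:
        print(f"{pkg}: not installed; plan must include an install step")
        continue
    if vkey(installed) < vkey(floor):
        print(f"{pkg}: installed {installed} predates assumed {floor}; re-check APIs")
```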
**Agent 5: Security.** Check for security gaps and missing error handling.
VERIFY:
1. Input validation at system boundaries
2. Error handling in API/DB operations
3. Auth/authz checks where needed
4. Edge cases are handled
Return: Security gaps and missing error handling.
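For example, in a FastAPI plan, "input validation at system boundaries" usually means constrained request models rather than ad-hoc checks inside handlers. A minimal Pydantic sketch (field names and limits are hypothetical):

```python
from pydantic import BaseModel, Field

class CreateUserRequest(BaseModel):
    # Boundary validation: constraints are enforced before any handler logic
    # runs, so malformed input is rejected early instead of reaching the DB.
    email: str = Field(..., max_length=254)
    display_name: str = Field(..., min_length=1, max_length=64)
    age: int = Field(..., ge=13, le=120)
```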
After all agents complete, create consolidated report:
## Plan Review: [Feature Name from plan]
**Plan:** `[path to plan file]`
**Tech Stack:** [Detected technologies]
### Summary Table
| Criterion | Status | Notes |
|-----------|--------|-------|
| Parallelization | ✅ GOOD / ⚠️ ISSUES | [Brief note] |
| TDD Adherence | ✅ GOOD / ⚠️ ISSUES | [Brief note] |
| Type/API Match | ✅ GOOD / ⚠️ ISSUES | [Brief note] |
| Library Practices | ✅ GOOD / ⚠️ ISSUES | [Brief note] |
| Security/Edge Cases | ✅ GOOD / ⚠️ ISSUES | [Brief note] |
### Issues Found
#### Critical (Must Fix Before Execution)
1. [Task N, Step M] ISSUE_CODE
- Issue: What's wrong
- Why: Impact if not fixed
- Fix: Specific change
- Suggested edit:
[replacement content]
#### Major (Should Fix)
2. [Task N] ISSUE_CODE
- Issue: ...
- Why: ...
- Fix: ...
#### Minor (Nice to Have)
3. [Task N] ISSUE_CODE
- Issue: ...
- Fix: ...
### Verdict
**Ready to execute?** Yes | With fixes (1-N) | No
**Reasoning:** [1-2 sentence assessment]
Save the review to the same directory as the plan (a path-derivation sketch follows the header template below):

- Plan: `docs/plans/2025-01-15-feature.md`
- Review: `docs/plans/2025-01-15-feature-review.md`

Review file header:
# Plan Review: [Feature Name]
> **To apply fixes:** Open new session, run:
> `Read this file, then apply the suggested fixes to [plan path]`
**Reviewed:** [Current date/time]
**Verdict:** [Yes | With fixes (1-N) | No]
---
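Deriving the review path from the plan path is mechanical; a minimal `pathlib` sketch matching the naming convention above:

```python
from pathlib import Path

plan = Path("docs/plans/2025-01-15-feature.md")
review = plan.with_name(f"{plan.stem}-review.md")
# review -> docs/plans/2025-01-15-feature-review.md
```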
Prompt user:
---
## Next Steps
**Review saved to:** `[review file path]`
**Options:**
1. **Apply fixes now** - Edit the plan file to address issues
2. **Save & fix later** - Open new session to apply fixes
3. **Proceed anyway** - Execute plan despite issues (not recommended for Critical)
Which option?