From beagle-python
Enforces a pre-report verification checklist for code reviews, confirming usages, context, and framework patterns to prevent false positives on unused code, validation, types, and leaks.
```
npx claudepluginhub existential-birds/beagle --plugin beagle-python
```

This skill uses the workspace's default tool permissions.
This protocol MUST be followed before reporting any code review finding. Skipping these steps leads to false positives that waste developer time and erode trust in reviews.
Before flagging ANY issue, verify:
Before flagging, you MUST:
Common false positives:
Before flagging, you MUST:
Common false positives:
Before flagging, you MUST:
Valid patterns often flagged incorrectly:
```python
# Type annotation, NOT a cast
data: UserData = await load_user()

# Type narrowing with isinstance
if isinstance(data, User):
    data.name  # Mypy knows this is User here
```
Before flagging, you MUST:
Common false positives:
Before flagging, you MUST:
Do NOT flag:
ONLY use for:
Use for:
Use for:
Use for:
These are NOT review blockers. They should be noted for the author's awareness but must not appear in the actionable issue count. The Verdict should ignore informational items entirely.
| Pattern | Why It's Valid |
|---|---|
| `dict.get(key, [])` | Returns a default for missing keys, not error suppression |
| `Optional[T]` return type | Standard way to express nullable values in Python typing |
| `assert` in test code | pytest uses assertions, not try/except |
| Type annotation on a variable | Not a cast, just a hint for type checkers |
| `typing.cast()` with prior validation | Valid after a runtime check confirms the type |
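The first and last rows above can be illustrated with a minimal stdlib-only sketch; the function and variable names here are hypothetical, chosen just to show the pattern:

```python
from typing import Optional, cast

# dict.get with a default: not error suppression, just a safe lookup
tags = {"a": ["x"]}
a_tags = tags.get("a", [])
b_tags = tags.get("b", [])  # missing key -> default [], no exception raised

# Optional[T]: the standard way to express "may return None"
def find_tag(name: str) -> Optional[str]:
    return {"a": "alpha"}.get(name)

# typing.cast after a runtime check: the isinstance call validates first,
# so the cast only restates to the type checker what is already verified
def first_string(items: list) -> str:
    head = items[0]
    if isinstance(head, str):
        return cast(str, head)
    raise TypeError("expected a string")
```

A review that flagged any of these as error suppression or an unchecked cast would be a false positive.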
| Pattern | Why It's Valid |
|---|---|
| `Depends()` without explicit type | FastAPI infers the dependency type from the function signature |
| `async def` endpoint without `await` | May use sync DB calls or simple returns |
| Response model different from DB model | Separation of concerns between API and persistence |
| `BackgroundTasks` parameter | Valid for fire-and-forget operations |
| Direct `request.state` access | Standard pattern for middleware-injected data |
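The "response model different from DB model" row is worth spelling out, since it is a deliberate design, not drift. A framework-agnostic sketch (the model names are hypothetical) of the same separation FastAPI's `response_model` enforces:

```python
from dataclasses import dataclass

# Persistence model: carries internal fields the API must never expose
@dataclass
class UserRow:
    id: int
    email: str
    password_hash: str

# API response model: deliberately narrower than the DB row
@dataclass
class UserResponse:
    id: int
    email: str

def to_response(row: UserRow) -> UserResponse:
    # The mapping drops password_hash; the differing shapes are the point
    return UserResponse(id=row.id, email=row.email)
```

Flagging the mismatch between the two models would be a false positive; the mismatch is the security boundary.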
| Pattern | Why It's Valid |
|---|---|
| `assert` without message | pytest rewrites assertions to show detailed diffs |
| `@pytest.fixture` without explicit scope | The default function scope is correct for most fixtures |
| `monkeypatch` over `unittest.mock` | Simpler API, pytest-native |
| Fixture returning mutable state | Each test gets a fresh fixture invocation by default |
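Two of these rows can be sketched without pytest itself: `make_cart` below is a hypothetical stand-in for a function-scoped fixture, and the bare asserts rely on pytest's assertion rewriting for diagnostics when run under pytest.

```python
def make_cart():
    # Stands in for a pytest fixture returning mutable state: with the
    # default function scope, every test gets a fresh invocation, so
    # mutation in one test cannot leak into another.
    return []

def test_add_item():
    cart = make_cart()
    cart.append("apple")
    assert cart == ["apple"]  # bare assert: pytest shows the diff on failure

def test_starts_empty():
    cart = make_cart()  # fresh state, unaffected by test_add_item
    assert cart == []
```

Flagging either the missing assert messages or the mutable fixture return here would be a false positive.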
| Pattern | Why It's Valid |
|---|---|
| `+?` lazy quantifier in regex | Prevents over-matching, correct for many patterns |
| Direct string concatenation | Simpler than f-strings for simple cases |
| Multiple returns in a function | Can improve readability |
| Comments explaining "why" | Better than no comments |
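The lazy-quantifier row is a frequent false positive because the lazy form is often the *correct* one. A minimal sketch (here `.+?` rather than a bare `+?`):

```python
import re

# Lazy .+? stops at the first closing bracket; greedy .+ swallows
# everything up to the last one.
text = "<a><b>"
lazy = re.search(r"<.+?>", text).group()    # matches "<a>"
greedy = re.search(r"<.+>", text).group()   # matches "<a><b>"
```

Flagging the lazy version as "inefficient" or "suspicious" here would reverse the actual bug: the greedy version is the over-matching one.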
Flag missing type annotation ONLY IF ALL of these are true:
- The type is not obvious from the assignment (e.g., `x = 5` is clearly `int`)

Flag bare `except` ONLY IF:
Flag missing try/except ONLY IF:
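To make the bare-`except` distinction concrete, a hedged sketch (the `parse_port_*` helpers are hypothetical) contrasting the genuinely flaggable case with a valid narrow handler:

```python
import logging

# Flaggable: a bare except swallows everything silently, including
# KeyboardInterrupt and SystemExit
def parse_port_bad(value):
    try:
        return int(value)
    except:  # noqa: E722 -- hides all errors, returns None silently
        return None

# Not flaggable: catches only the expected failure and records it
def parse_port_good(value):
    try:
        return int(value)
    except ValueError:
        logging.warning("invalid port %r, using default", value)
        return 8080
```

The second function should pass review even though it "suppresses" an exception: the exception type is narrow and the failure is logged.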
Final verification:
```
[FILE:LINE] ISSUE_TITLE
```

If uncertain about any finding, either: