Schema for tracking code review outcomes to enable feedback-driven skill improvement. Use when logging review results, recording whether review feedback was correct, rejected, or deferred, or analyzing review quality.
To install:

```
/plugin marketplace add anderskev/beagle
/plugin install anderskev-beagle@anderskev/beagle
```

This skill inherits all available tools. When active, it can use any tool Claude has access to.
Structured format for logging code review outcomes. This data enables feedback-driven analysis of review quality and automated skill improvement.
```csv
date,file,line,rule_source,category,severity,issue,verdict,rationale
```
| Field | Type | Description | Example Values |
|---|---|---|---|
| date | ISO date | When the review occurred | 2025-12-23 |
| file | path | Relative file path | amelia/agents/developer.py |
| line | string | Line number(s) | 128, 190-191 |
| rule_source | string | Skill and rule that triggered the issue | python-code-review/common-mistakes:unused-variables, pydantic-ai-common-pitfalls:tool-decorator |
| category | enum | Issue taxonomy | type-safety, async, error-handling, style, patterns, testing, security |
| severity | enum | As flagged by the reviewer | critical, major, minor |
| issue | string | Brief description | Return type list[Any] loses type safety |
| verdict | enum | Human decision | ACCEPT, REJECT, DEFER, ACKNOWLEDGE |
| rationale | string | Why the verdict was chosen | pydantic-ai docs explicitly support this pattern |
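As a sketch of how a log entry can be checked against this schema, the snippet below validates the enum fields of one row. The field names and allowed values come straight from the tables in this document; the helper names (`validate_row`, `VALID_VERDICTS`, and so on) are illustrative, not part of the skill.

```python
import csv
import io

# Schema fields and enum values from the tables above.
FIELDS = ["date", "file", "line", "rule_source", "category",
          "severity", "issue", "verdict", "rationale"]
VALID_VERDICTS = {"ACCEPT", "REJECT", "DEFER", "ACKNOWLEDGE"}
VALID_SEVERITIES = {"critical", "major", "minor"}
VALID_CATEGORIES = {"type-safety", "async", "error-handling",
                    "style", "patterns", "testing", "security"}

def validate_row(row: dict[str, str]) -> list[str]:
    """Return a list of problems with one log entry (empty = valid)."""
    problems = []
    if row["verdict"] not in VALID_VERDICTS:
        problems.append(f"unknown verdict: {row['verdict']}")
    if row["severity"] not in VALID_SEVERITIES:
        problems.append(f"unknown severity: {row['severity']}")
    if row["category"] not in VALID_CATEGORIES:
        problems.append(f"unknown category: {row['category']}")
    return problems

sample = (
    "date,file,line,rule_source,category,severity,issue,verdict,rationale\n"
    "2025-12-27,amelia/agents/developer.py,128,"
    "python-code-review:type-safety,type-safety,major,"
    "Return type list[Any] loses type safety,ACCEPT,Changed to list[AgentMessage]\n"
)
rows = list(csv.DictReader(io.StringIO(sample)))
assert validate_row(rows[0]) == []
```

Note that because the format is plain CSV, the `issue` and `rationale` fields should avoid unquoted commas.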
| Verdict | Meaning | Action |
|---|---|---|
| ACCEPT | Issue is valid; will fix | Code change made |
| REJECT | Issue is invalid or wrong | No change; may improve the skill |
| DEFER | Valid but not fixing now | Tracked for later |
| ACKNOWLEDGE | Valid but intentional | Document why it is intentional |
ACCEPT: The reviewer correctly identified a real issue.

```csv
2025-12-27,amelia/agents/developer.py,128,python-code-review:type-safety,type-safety,major,Return type list[Any] loses type safety,ACCEPT,Changed to list[AgentMessage]
```

REJECT: The reviewer was wrong; the code is correct.

```csv
2025-12-23,amelia/drivers/api/openai.py,102,python-code-review:line-length,style,minor,Line too long (104 > 100),REJECT,ruff check passes - no E501 violation exists
```

DEFER: Valid issue but out of scope for the current work.

```csv
2025-12-22,api/handlers.py,45,fastapi-code-review:error-handling,error-handling,minor,Missing specific exception type,DEFER,Refactoring planned for Q1
```

ACKNOWLEDGE: Intentional design decision.

```csv
2025-12-21,core/cache.py,89,python-code-review:optimization,patterns,minor,Using dict instead of dataclass,ACKNOWLEDGE,Performance-critical path - intentional
```
Format: `skill-name/section:rule-id` or `skill-name:rule-id`

Examples:

- python-code-review/common-mistakes:unused-variables
- pydantic-ai-common-pitfalls:tool-decorator
- fastapi-code-review:dependency-injection
- pytest-code-review:fixture-scope

Use the skill folder name and identify the specific rule or section that triggered the issue.
| Category | Description | Examples |
|---|---|---|
| type-safety | Type annotation issues | Missing types, incorrect types, Any usage |
| async | Async/await issues | Blocking in async, missing await |
| error-handling | Exception handling | Bare except, missing error handling |
| style | Code style/formatting | Line length, naming conventions |
| patterns | Design patterns | Anti-patterns, framework misuse |
| testing | Test quality | Missing coverage, flaky tests |
| security | Security issues | Injection, secrets exposure |
- ACCEPT: Explain what you fixed.
- REJECT: Explain why the issue is invalid.
- DEFER: Explain when/why it will be addressed.
- ACKNOWLEDGE: Explain why it's intentional.
```csv
date,file,line,rule_source,category,severity,issue,verdict,rationale
2025-12-20,tests/integration/test_cli_flows.py,407,pytest-code-review:parametrization,testing,minor,Unused extra_args parameter in parametrization,ACCEPT,Fixed - removed dead parameter
2025-12-20,tests/integration/test_cli_flows.py,237-242,pytest-code-review:coverage,testing,major,Missing review --local in git repo error test,REJECT,Not applicable - review uses different error path
2025-12-21,amelia/server/orchestrator/service.py,1702,python-code-review:immutability,patterns,critical,Direct mutation of frozen ExecutionState,ACCEPT,Fixed using model_copy(update={...})
2025-12-23,amelia/drivers/api/tools.py,48-53,pydantic-ai-common-pitfalls:tool-decorator,patterns,major,Misleading RunContext pattern - should use decorators,REJECT,pydantic-ai docs explicitly support passing raw functions with RunContext to Agent(tools=[])
2025-12-23,amelia/drivers/api/openai.py,102,python-code-review:line-length,style,minor,Line too long (104 > 100),REJECT,ruff check passes - no E501 violation exists
2025-12-27,amelia/core/orchestrator.py,190-191,python-code-review:exception-handling,error-handling,major,Generic exception handling in get_code_changes_for_review,ACCEPT,Changed Exception to (FileNotFoundError OSError)
2025-12-27,amelia/agents/developer.py,128,python-code-review:type-safety,type-safety,major,Return type list[Any] loses type safety,ACCEPT,Changed to list[AgentMessage] and removed unused Any import
```
See review-skill-improver skill for the full analysis workflow.
| Pattern | Skill Improvement |
|---|---|
| "linter passes" rejections | Add linter verification step before flagging style issues |
| "docs support this" rejections | Add exception for documented framework patterns |
| "intentional" rejections | Add codebase context check before flagging |
| "wrong code path" rejections | Add code tracing step before claiming gaps |
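The rejection patterns above fall out of a simple aggregation: count REJECT verdicts per rule_source and look at the clustered rationales. A sketch of that analysis, using rows adapted from the example log in this document (the aggregation code itself is illustrative, not part of the skill):

```python
import csv
import io
from collections import Counter

# Rows adapted from the example log above.
log = """date,file,line,rule_source,category,severity,issue,verdict,rationale
2025-12-23,amelia/drivers/api/openai.py,102,python-code-review:line-length,style,minor,Line too long (104 > 100),REJECT,ruff check passes
2025-12-23,amelia/drivers/api/tools.py,48-53,pydantic-ai-common-pitfalls:tool-decorator,patterns,major,Misleading RunContext pattern,REJECT,docs support this pattern
2025-12-27,amelia/agents/developer.py,128,python-code-review:type-safety,type-safety,major,Return type list[Any] loses type safety,ACCEPT,Fixed
"""

# Count REJECT verdicts per rule_source; rules that over-flag
# are candidates for skill improvements like those in the table.
rejects = Counter(
    row["rule_source"]
    for row in csv.DictReader(io.StringIO(log))
    if row["verdict"] == "REJECT"
)
print(rejects.most_common())
```

Grouping the rejection rationales for each high-count rule then suggests which row of the pattern table applies.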