From avila-tek-skill-pack
Conducts multi-axis code review. Use before merging any change. Use when reviewing code written by yourself, another agent, or a human. Use when you need to assess code quality across multiple dimensions before it enters the main branch.
Install: `npx claudepluginhub avila-tek/avila-tek-skill-pack`

This skill uses the workspace's default tool permissions.
Detect the active stack from the project's package files. State it explicitly: "Active stack: {name}".
| Stack | Detection signal |
|---|---|
| NestJS | @nestjs/core in package.json |
| Next.js | next in package.json (not Angular, not React Native) |
| Go | go.mod present |
| Spring Boot | pom.xml or build.gradle containing spring-boot |
| React Native | react-native in package.json |
| Flutter | pubspec.yaml containing flutter: |
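The detection table above can be sketched as a pure function. This is illustrative only: the function name `detectStack` and the files-as-map input shape are invented here; the detection signals themselves come from the table.

```typescript
// detectStack and its input shape are illustrative, not part of the skill.
type Stack = "NestJS" | "Next.js" | "Go" | "Spring Boot" | "React Native" | "Flutter" | "Unknown";

function detectStack(files: Record<string, string>): Stack {
  const pkg = files["package.json"];
  if (pkg) {
    const parsed = JSON.parse(pkg);
    const deps = { ...(parsed.dependencies ?? {}), ...(parsed.devDependencies ?? {}) };
    if (deps["@nestjs/core"]) return "NestJS";
    if (deps["react-native"]) return "React Native"; // check before "next"
    if (deps["next"]) return "Next.js";
  }
  if ("go.mod" in files) return "Go";
  const build = (files["pom.xml"] ?? "") + (files["build.gradle"] ?? "");
  if (build.includes("spring-boot")) return "Spring Boot";
  if ((files["pubspec.yaml"] ?? "").includes("flutter:")) return "Flutter";
  return "Unknown";
}
```

Once a stack is detected, state it explicitly as the gate requires, e.g. "Active stack: NestJS".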
Required before beginning review — do not skip:
- references/nestjs.md
- references/nextjs.md
- references/go.md
- references/spring-boot.md
- references/react-native.md
- references/flutter.md

Before reviewing: check the Red Flags section of the loaded reference. If any hit, flag it as a blocking finding. Run the full Verification Checklist when the review is complete.
Multi-dimensional code review with quality gates. Every change gets reviewed before merge — no exceptions. Review covers five axes: correctness, readability, architecture, security, and performance.
The approval standard: Approve a change when it definitely improves overall code health, even if it isn't perfect. Perfect code doesn't exist — the goal is continuous improvement. Don't block a change because it isn't exactly how you would have written it. If it improves the codebase and follows the project's conventions, approve it.
Every review evaluates code across these dimensions:
**Correctness:** Does the code do what it claims to do?
If the change implements a story spec (docs/epics/E-XXX_slug/stories/E-XXX_S-YYY_slug/E-XXX_S-YYY_slug.md), verify that every Section 2 Acceptance Criterion (AC-01, AC-02, …) is covered by tests and implementation before approving.

**Readability:** Can another engineer (or agent) understand this code without the author explaining it?
- Are names descriptive? (no generic `temp`, `data`, `result` without context)
- Any leftover artifacts: unused parameters (`_unused`), backwards-compat shims, or `// removed` comments?

**Architecture:** Does the change fit the system's design?

**Security:** Does the change introduce vulnerabilities? For detailed security guidance, see security-and-hardening.

**Performance:** Does the change introduce performance problems? For detailed profiling and optimization, see performance-optimization.

**Stack conventions:** Does the change follow the project's specific stack standards?
Check the loaded reference file (from the Stack Activation Gate) for:
For monorepos with multiple stacks (e.g., NestJS + Next.js): apply each stack's reference to its respective files.
Small, focused changes are easier to review, faster to merge, and safer to deploy. Target these sizes:
~100 lines changed → Good. Reviewable in one sitting.
~300 lines changed → Acceptable if it's a single logical change.
~1000 lines changed → Too large. Split it.
What counts as "one change": A single self-contained modification that addresses one thing, includes related tests, and keeps the system functional after submission. One part of a feature — not the whole feature.
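The size thresholds above can be encoded as a simple triage function. This is a sketch only: `triageChangeSize` is an invented name, and the exact cutoffs between the "~" buckets are judgment calls, not hard rules.

```typescript
// Illustrative encoding of the change-size guidance; cutoffs are approximate.
function triageChangeSize(linesChanged: number): "good" | "acceptable" | "split" {
  if (linesChanged <= 100) return "good";
  if (linesChanged <= 300) return "acceptable"; // only if it is one logical change
  return "split";
}
```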
Splitting strategies when a change is too large:
| Strategy | How | When |
|---|---|---|
| Stack | Submit a small change, start the next one based on it | Sequential dependencies |
| By file group | Separate changes for groups needing different reviewers | Cross-cutting concerns |
| Horizontal | Create shared code/stubs first, then consumers | Layered architecture |
| Vertical | Break into smaller full-stack slices of the feature | Feature work |
When large changes are acceptable: Complete file deletions and automated refactoring where the reviewer only needs to verify intent, not every line.
Separate refactoring from feature work. A change that refactors existing code and adds new behavior is two changes — submit them separately. Small cleanups (variable renaming) can be included at reviewer discretion.
Every change needs a description that stands alone in version control history.
First line: Short, imperative, standalone. "Delete the FizzBuzz RPC" not "Deleting the FizzBuzz RPC." Must be informative enough that someone searching history can understand the change without reading the diff.
Body: What is changing and why. Include context, decisions, and reasoning not visible in the code itself. Link to bug numbers, benchmark results, or design docs where relevant. Acknowledge approach shortcomings when they exist.
Anti-patterns: "Fix bug," "Fix build," "Add patch," "Moving code from A to B," "Phase 1," "Add convenience functions."
Before looking at code, understand the intent:
- What is this change trying to accomplish?
- What spec or task does it implement?
- What is the expected behavior change?
Tests reveal intent and coverage:
- Do tests exist for the change?
- Do they test behavior (not implementation details)?
- Are edge cases covered?
- Do tests have descriptive names?
- Would the tests catch a regression if the code changed?
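A hypothetical illustration of "test behavior, not implementation details"; the `applyDiscount` function and scenario are invented for this sketch.

```typescript
// applyDiscount is a stand-in function invented for this illustration.
function applyDiscount(price: number, percent: number): number {
  return Math.round(price * (1 - percent / 100) * 100) / 100;
}

// Behavior-focused check: asserts on the observable result, so it survives a
// refactor of the rounding strategy while still catching real regressions.
const observed = applyDiscount(200, 10); // 180

// Implementation-focused (avoid): spying on whether Math.round was called
// couples the test to one implementation and catches no user-visible bug.
```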
Walk through the code with the five axes in mind:
For each file changed:
1. Correctness: Does this code do what the test says it should?
2. Readability: Can I understand this without help?
3. Architecture: Does this fit the system?
4. Security: Any vulnerabilities?
5. Performance: Any bottlenecks?
Label every comment with its severity so the author knows what's required vs optional:
| Prefix | Meaning | Author Action |
|---|---|---|
| (no prefix) | Required change | Must address before merge |
| Critical: | Blocks merge | Security vulnerability, data loss, broken functionality |
| Nit: | Minor, optional | Author may ignore — formatting, style preferences |
| Optional: / Consider: | Suggestion | Worth considering but not required |
| FYI | Informational only | No action needed — context for future reference |
This prevents authors from treating all feedback as mandatory and wasting time on optional suggestions.
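If review comments are ever processed by tooling, the prefix convention above can be treated as a tiny classifier. The `parseSeverity` helper below is illustrative; only the prefixes themselves come from the table.

```typescript
// The severity-prefix table as a classifier; parseSeverity is an invented name.
type Severity = "critical" | "required" | "nit" | "optional" | "fyi";

function parseSeverity(comment: string): Severity {
  if (/^Critical:/.test(comment)) return "critical";
  if (/^Nit:/.test(comment)) return "nit";
  if (/^(Optional|Consider):/.test(comment)) return "optional";
  if (/^FYI\b/.test(comment)) return "fyi";
  return "required"; // no prefix means the change must land before merge
}
```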
Check the author's verification story:
- What tests were run?
- Did the build pass?
- Was the change tested manually?
- Are there screenshots for UI changes?
- Is there a before/after comparison?
After the review is complete, write summary.md to the active feature or story folder. This is always the last step — do not skip it.
Location:
- docs/epics/E-XXX_slug/stories/E-XXX_S-YYY_slug/summary.md
- docs/features/<feature>/summary.md

Format:
# Review Summary — [Feature or Story name]
**Date:** YYYY-MM-DD
**Scope:** [Commit range, PR title, or "recent session"]
## Findings
| Axis | Severity | Location | Issue | Resolution |
|------|----------|----------|-------|------------|
| Correctness | Critical | src/foo.ts:42 | Missing null check on user input | Fixed in commit abc123 |
| Architecture | Important | src/bar.ts:15 | Controller imports repository directly | Deferred — tracking TASK-421 |
| Readability | Suggestion | src/baz.ts:8 | Generic name `data` — rename to `userProfile` | Fixed |
## Recurring Patterns
Patterns that appeared more than once during this session. Watch for these going forward:
- [pattern description — what it is and why it matters]
## Positive Patterns
Good practices observed — keep doing these:
- [practice description]
## Follow-Up Actions
- [ ] [Specific action, owner if known]
Rules for the summary:
Use different models for different review perspectives:
    Model A writes the code
            │
            ▼
    Model B reviews for correctness and architecture
            │
            ▼
    Model A addresses the feedback
            │
            ▼
    Human makes the final call
This catches issues that a single model might miss — different models have different blind spots.
Example prompt for a review agent:
    Review this code change for correctness, security, and adherence to
    our project conventions. The spec says [X]. The change should [Y].
    Flag any issues as Critical, Important, or Suggestion.
After any refactoring or implementation change, check for orphaned code:
Don't leave dead code lying around — it confuses future readers and agents. But don't silently delete things you're not sure about. When in doubt, ask.
    DEAD CODE IDENTIFIED:
    - formatLegacyDate() in src/utils/date.ts — replaced by formatDate()
    - OldTaskCard component in src/components/ — replaced by TaskCard
    - LEGACY_API_URL constant in src/config.ts — no remaining references

    → Safe to remove these?
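One rough way to surface candidates like those above is to count whole-word references to an identifier across source files. This is illustrative only: it assumes plain identifiers, misses dynamic references, and is a prompt for a question to the author, not a deletion tool.

```typescript
// Count whole-word textual references to an identifier across file contents.
// Assumes the identifier contains no regex metacharacters.
function countReferences(identifier: string, files: Record<string, string>): number {
  const pattern = new RegExp(`\\b${identifier}\\b`, "g");
  let count = 0;
  for (const content of Object.values(files)) {
    count += (content.match(pattern) ?? []).length;
  }
  return count;
}

// A count of 1 means the only match is the declaration itself: a dead-code
// candidate, but still worth confirming before deleting.
```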
Slow reviews block entire teams. The cost of context-switching to review is less than the waiting cost imposed on others.
When resolving review disputes, apply this hierarchy:
Don't accept "I'll clean it up later." Experience shows deferred cleanup rarely happens. Require cleanup before submission unless it's a genuine emergency. If surrounding issues can't be addressed in this change, require filing a bug with self-assignment.
When reviewing code — whether written by you, another agent, or a human:
Part of code review is dependency review:
Before adding any dependency:
- Check for known vulnerabilities (npm audit)

Rule: Prefer standard library and existing utilities over new dependencies. Every dependency is a liability.
## Review: [PR/Change title]
### Context
- [ ] I understand what this change does and why
### Correctness
- [ ] Change matches spec/task requirements
- [ ] Edge cases handled
- [ ] Error paths handled
- [ ] Tests cover the change adequately
### Readability
- [ ] Names are clear and consistent
- [ ] Logic is straightforward
- [ ] No unnecessary complexity
### Architecture
- [ ] Follows existing patterns
- [ ] No unnecessary coupling or dependencies
- [ ] Appropriate abstraction level
### Security
- [ ] No secrets in code
- [ ] Input validated at boundaries
- [ ] No injection vulnerabilities
- [ ] Auth checks in place
- [ ] External data sources treated as untrusted
### Performance
- [ ] No N+1 patterns
- [ ] No unbounded operations
- [ ] Pagination on list endpoints
### Stack Conventions
- [ ] No Red Flags from active STACK.md present in the change
- [ ] Key Patterns from active STACK.md followed
- [ ] STACK.md Verification Checklist passes
### Verification
- [ ] Tests pass
- [ ] Build succeeds
- [ ] Manual verification done (if applicable)
### Verdict
- [ ] **Approve** — Ready to merge
- [ ] **Request changes** — Issues must be addressed
- ../../references/security-checklist.md
- ../../references/performance-checklist.md

After review is complete:
When the review passes, suggest to the user:
"Review passed. When you're ready, run /ship to launch (dev-shipping-and-launch)."
Do not invoke /ship automatically.