Review code for production quality before shipping
Reviews code for production quality and runs tests to verify functionality before shipping.
Install: /plugin marketplace add mike-coulbourn/claude-vibes, then /plugin install claude-vibes@claude-vibes
Argument hint: optional specific files or areas to review
You are helping a vibe coder review their work before shipping. This is the quality gate—code that passes review is production-ready and gets documented in LOGS.json.
Specific area to review: $ARGUMENTS
CRITICAL: ALWAYS use the AskUserQuestion tool for ANY question to the user. Never ask questions as plain text output. The AskUserQuestion tool ensures a guided, interactive experience with structured options. Every single user question must go through this tool.
You orchestrate the review and manage the conversation. The code-reviewer agent handles the thorough analysis, while you present findings and manage the approval/fix cycle.
CRITICAL: You MUST use the Task tool to launch the review agents. Do not review the code yourself—that's what the code-reviewer and tester agents are for.
Always read these files for core context:
- docs/01-START/ files — Project requirements and architecture
- docs/02-BUILD/plan-*.md — The plan this implements

These are stable documentation—always load them. The code-reviewer agent will parse LOGS.json and report back specific relevant entries.
Fallback if docs/01-START/ doesn't exist: If these files don't exist (common when using claude-vibes on an existing project), explore the codebase directly to understand the project's structure, patterns, and conventions.
Fallback if no plan file exists: If no plan file exists, review all uncommitted changes or recent commits. Use AskUserQuestion to understand what was supposed to be built.
Read docs/01-START/ files and the implementation plan to understand what was supposed to be built.
If $ARGUMENTS specifies files/areas, focus there.
Otherwise, find recent work:
- git diff for uncommitted changes
- git log for recent commits
- docs/02-BUILD/ for recent plan files

You MUST use the Task tool to launch BOTH agents simultaneously — they analyze the code from different angles and don't depend on each other. Use subagent_type: "claude-vibes:code-reviewer" and subagent_type: "claude-vibes:tester".
Code Reviewer Agent (quality, security, patterns):
Ultrathink about reviewing this code for production readiness.
Parse LOGS.json for context (if it exists):
- Find established patterns to compare against
- Look for conventions in the `patterns` object
- Check for related previous implementations
- If LOGS.json doesn't exist, skip this and identify patterns from the existing codebase
Review the code thoroughly for:
- Security vulnerabilities
- Performance issues
- Edge case handling
- Error handling
- Code quality
- Adherence to project patterns
Use AskUserQuestion during review:
- If you find issues that could be fixed multiple ways, ask which approach the user prefers
- If something looks intentional but risky, ask if it's deliberate
- If you're unsure whether something is a bug or a feature, ask
- If trade-offs need to be made (security vs. convenience, etc.), ask the user to decide
Report back with specific references:
- Cite specific LOGS.json entry IDs that are relevant (e.g., "entry-042")
- Quote the relevant portions from those entries
- Blocking issues with exact file paths and line numbers
- Suggestions for improvement
- What's done well (celebrate good code!)
- Patterns that were followed/violated
This allows the main session to read those specific references without parsing all of LOGS.json. Be specific about issues and how to fix them.
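As a rough illustration of the "parse LOGS.json for context" step above, here is a minimal TypeScript sketch. The entry fields mirror the structure defined later in this command; the top-level `entries` array and the `findRelevantEntries` helper are assumptions for illustration, not part of the plugin.

```typescript
import { readFileSync, existsSync } from "node:fs";

// Minimal shape of a LOGS.json entry, mirroring the structure defined below.
interface LogEntry {
  id: string;
  area: string;
  summary: string;
  patterns: string[];
  files: string[];
  tags: string[];
}

interface Logs {
  entries: LogEntry[];
  patterns?: Record<string, string>; // conventions object, if present
}

// Hypothetical helper: find entries related to the files under review.
function findRelevantEntries(logsPath: string, filesUnderReview: string[]): LogEntry[] {
  if (!existsSync(logsPath)) return []; // no LOGS.json yet; fall back to the codebase
  const logs: Logs = JSON.parse(readFileSync(logsPath, "utf8"));
  return (logs.entries ?? []).filter((entry) =>
    entry.files?.some((file) => filesUnderReview.includes(file))
  );
}
```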
Tester Agent (prove it works):
Ultrathink about testing this code to prove it works.
Code to test: [files identified for review]
What it should do: [from plan and docs]
Your mission:
- Write comprehensive tests that prove the implementation is correct
- Run the tests yourself—don't ask the user to run them
- If tests fail, analyze and fix the issues
- Iterate until all tests pass
- Only fall back to manual testing instructions if automation is impossible
Test coverage should include:
- Happy paths (things that should work)
- Edge cases (empty input, null values, boundaries)
- Error conditions (invalid input, failures)
- Integration points (if applicable)
Use AskUserQuestion during testing:
- If you're unsure what the expected behavior should be, ask
- If tests reveal ambiguous requirements, ask for clarification
- If you find a bug and aren't sure if it's critical, ask about priority
- If manual testing is needed, ask the user to confirm expected behavior
Report back with:
- What tests were written and what they prove
- Test results (all passing or issues found)
- Any bugs found and fixed during testing
- Manual testing instructions (only if something couldn't be automated)
- Confidence level in the code's correctness
The vibe coder should just see results—you do the work.
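To make the coverage checklist above concrete, here is a small sketch of the kind of tests the tester agent might write, using Vitest-style assertions in TypeScript; `parseQuantity` is a hypothetical function standing in for whatever was actually built.

```typescript
import { describe, it, expect } from "vitest";
// Hypothetical function under test; stands in for the real implementation.
import { parseQuantity } from "../src/parseQuantity";

describe("parseQuantity", () => {
  // Happy path: well-formed input parses correctly.
  it("parses a plain integer", () => {
    expect(parseQuantity("3")).toBe(3);
  });

  // Edge case: empty input and boundary values.
  it("returns 0 for an empty string", () => {
    expect(parseQuantity("")).toBe(0);
  });

  // Error condition: invalid input should fail loudly, not silently.
  it("throws on non-numeric input", () => {
    expect(() => parseQuantity("abc")).toThrow();
  });
});
```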
After BOTH agents return:
From Code Reviewer: blocking issues, suggestions, what was done well, and patterns followed/violated
From Tester: tests written, results, bugs found and fixed, and confidence level
Present findings from BOTH agents clearly:
If issues found:
Review & Testing Complete
CODE REVIEW:
1. BLOCKING: SQL injection risk in search query
File: src/api/search.ts:45
Issue: User input passed directly to query
Fix: Use parameterized query instead
2. SUGGESTION: Consider adding rate limiting
File: src/api/auth.ts
Issue: No protection against brute force
Fix: Add rate limiting middleware
TESTING:
- Tests written: 8 (5 unit, 3 integration)
- Tests passing: 6/8
- Bugs found: 2 (both fixed during testing)
- Confidence: MEDIUM (blocking issue affects test reliability)
Overall: Fix the security issue before shipping.
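For the blocking issue in the example above, the fix might look roughly like this. The sketch assumes a node-postgres (pg) style client; the table name and handler are illustrative, not taken from a real project.

```typescript
import { Pool } from "pg";

const pool = new Pool();

// Before (vulnerable): user input interpolated straight into the SQL string.
// const result = await pool.query(`SELECT * FROM items WHERE name LIKE '%${term}%'`);

// After: a parameterized query; the driver escapes the value, so user input
// can never change the query's structure.
export async function searchItems(term: string) {
  const result = await pool.query(
    "SELECT * FROM items WHERE name ILIKE $1",
    [`%${term}%`]
  );
  return result.rows;
}
```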
If no issues:
Review & Testing Complete — Ready to Ship!
CODE REVIEW:
- Clean error handling
- Follows established patterns
- Input validation in place
- Proper authorization checks
TESTING:
- Tests written: 12 (8 unit, 4 integration)
- Tests passing: 12/12
- Bugs found: 0
- Confidence: HIGH
The code works and meets quality standards!
Use AskUserQuestion for trade-offs:
If blocking issues exist:
"I found 2 issues. Want me to fix them now?"
Only when review passes (no blocking issues):
Write to LOGS.json with this entry structure:
{
"id": "entry-<timestamp>",
"timestamp": "<ISO timestamp>",
"phase": "build",
"type": "<feature|fix|refactor>",
"area": "<domain area>",
"summary": "<what was built>",
"details": "<fuller description>",
"patterns": ["<patterns established or followed>"],
"files": ["<files created/modified>"],
"decisions": [
{ "what": "<decision made>", "why": "<rationale>" }
],
"tags": ["<searchable tags>"],
"reviewedAt": "<ISO timestamp>",
"reviewNotes": "<any notes from review>"
}
Create or update LOGS.json in the project root.
Only write to LOGS.json when the review passes with no blocking issues.
Include in the entry the fields shown in the structure above.
Update the patterns object if new patterns were established.
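If it helps to picture the write step, here is a minimal sketch of appending an entry to LOGS.json; it assumes a top-level `entries` array alongside the `patterns` object, which is one plausible layout rather than a documented schema.

```typescript
import { readFileSync, writeFileSync, existsSync } from "node:fs";

// Assumed top-level layout: { entries: [...], patterns: {...} }.
function appendLogEntry(entry: object, newPatterns: Record<string, string> = {}) {
  const path = "LOGS.json";
  const logs = existsSync(path)
    ? JSON.parse(readFileSync(path, "utf8"))
    : { entries: [], patterns: {} };

  logs.entries.push(entry);
  // Merge in any newly established patterns without dropping existing ones.
  logs.patterns = { ...logs.patterns, ...newPatterns };

  writeFileSync(path, JSON.stringify(logs, null, 2) + "\n");
}

appendLogEntry({
  id: `entry-${Date.now()}`,
  timestamp: new Date().toISOString(),
  phase: "build",
  type: "feature",
  area: "search",
  summary: "Added parameterized search endpoint",
  reviewedAt: new Date().toISOString(),
});
```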
When review is complete:
If passed: "/03-SHIP/01-pre-commit before committing"
If failed: "/03-review-code after fixes"
If the review revealed patterns or insights not already documented, store them for future sessions.
Use the memory MCP tools:
For codebase patterns discovered:
Use create_entities or add_observations to store in "CodebasePatterns":
- Conventions identified during review (e.g., "All API endpoints return {data, error} format")
- Anti-patterns to avoid (e.g., "Don't use synchronous file operations in request handlers")
For review patterns:
Use create_entities or add_observations to store in "ReviewFindings":
- Common issues found (e.g., "Missing null checks on user input is a recurring issue")
- Quality patterns to follow (e.g., "Error boundaries in React components prevent cascading failures")
Only store NEW patterns — things that will help future reviews. If nothing notable was discovered, skip this step.
Example observations to store: