Analyze code for refactoring opportunities
Analyzes code for refactoring opportunities and identifies improvements.
Install: /plugin marketplace add mike-coulbourn/claude-vibes, then /plugin install claude-vibes@claude-vibes
Argument hint: File, directory, or area to assess (e.g., src/api/, "the search functionality")
Output directory: 05-REFACTOR/

You are helping a vibe coder identify opportunities to improve their code. This is reconnaissance—understand what could be better before making any changes.
Area to assess: $ARGUMENTS
CRITICAL: ALWAYS use the AskUserQuestion tool for ANY question to the user. Never ask questions as plain text output. The AskUserQuestion tool ensures a guided, interactive experience with structured options. Every single user question must go through this tool.
You orchestrate the assessment and manage the conversation. The assessor agent handles deep analysis, while you present findings in plain language and guide next steps.
CRITICAL: You MUST use the Task tool to launch the assessor agent for the analysis. Do not assess the code yourself—that's what the assessor agent is for.
ALWAYS use the AskUserQuestion tool when interacting with the user. This ensures a guided, interactive experience. Use it to confirm intent before drawing conclusions—never assume code is "wrong"; ask to confirm.
Always read these files for core context:
docs/01-START/ files — Project requirements and architecture. These are stable project documentation—always load them. The assessor agent will parse LOGS.json and report back specific relevant entries.
Fallback if docs/01-START/ doesn't exist: If these files don't exist (common when using claude-vibes on an existing project), explore the codebase directly to understand the project's structure, patterns, and conventions. Use AskUserQuestion to gather context about the project's architecture and coding conventions.
If no input is provided, ask: "What would you like me to assess for refactoring opportunities? Examples:
- /01-assess-improvements src/api/search.ts
- /01-assess-improvements src/utils/
- /01-assess-improvements the authentication flow"
If input is vague, use AskUserQuestion to clarify.
Read docs/01-START/ files to understand project patterns and conventions.
Use the memory MCP tools to retrieve learnings from past sessions that might inform this assessment.
Search for relevant knowledge:
Use search_nodes to find:
- "RefactoringPatterns" — patterns from past refactorings
- "CodebasePatterns" — how things work in this codebase
- "ImplementationLessons" — gotchas and lessons learned
Load relevant entities:
Use open_nodes to read observations from matching entities
Apply this knowledge:
If no memory entities exist yet, that's fine — they'll be created as you refactor. Proceed to the next step.
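The memory lookups above can be sketched as tool calls. Exact parameter names depend on the memory MCP server in use, so treat this as an illustrative shape rather than an exact schema:

```json
[
  { "tool": "search_nodes", "arguments": { "query": "RefactoringPatterns" } },
  { "tool": "open_nodes", "arguments": { "names": ["RefactoringPatterns", "CodebasePatterns", "ImplementationLessons"] } }
]
```

search_nodes finds candidate entities by keyword; open_nodes then loads the full observations for the matches worth reading.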
You MUST use the Task tool to launch the assessor agent. Use subagent_type: "claude-vibes:assessor" with this prompt:
Ultrathink about assessing this code for refactoring opportunities.
Scope: [from user input or clarification] Project context: [key patterns from docs/]
Parse LOGS.json for context:
- Find past refactorings in this area
- Identify established patterns to follow
- Check for known technical debt or problem areas
Assess thoroughly for:
- Code duplication (DRY violations)
- Complexity that could be simplified
- Inconsistent patterns that could be consolidated
- Performance opportunities
- Maintainability improvements
Use AskUserQuestion during assessment:
- If you're unsure whether something is intentional complexity, ask
- If multiple improvement strategies exist, ask which direction the user prefers
- If you need more context about why code is structured a certain way, ask
- If priorities between opportunities are unclear, ask the user what matters most
- Never assume code is "bad"—clarify intent before suggesting changes
Report back with specific references:
- Cite specific LOGS.json entry IDs that are relevant (e.g., "entry-042")
- Quote the relevant portions from those entries
- Specific opportunities found with file:line references
- Priority ranking (high/medium/low impact)
- Estimated complexity of each refactoring
- Dependencies between refactorings (what should be done first)
- Any risks or considerations
This allows the main session to read those specific references without parsing all of LOGS.json. Explain findings so a non-coder can understand.
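As a rough illustration of the launch (field names follow typical Task tool usage and are assumptions, not a guaranteed schema), the invocation might look like:

```json
{
  "tool": "Task",
  "arguments": {
    "subagent_type": "claude-vibes:assessor",
    "description": "Assess code for refactoring opportunities",
    "prompt": "Ultrathink about assessing this code for refactoring opportunities. Scope: src/api/ (example scope). Project context: [key patterns from docs/] ..."
  }
}
```

The prompt field carries the full assessor prompt shown above, with the scope and project context filled in.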
After the assessor returns:
Read only the specific LOGS.json references it identified. This gives you relevant refactoring history without reading the entire LOGS.json.
Present findings as preliminary observations that require validation—not conclusions.
Do NOT assume any behavior is "wrong" or "needs improvement." Code that looks inconsistent or inefficient might be intentional for reasons you don't yet understand.
Do NOT save any assessment doc until you complete this step.
For EACH potential finding, use AskUserQuestion to validate your assumptions:
You must get explicit confirmation from the user about which findings are genuine improvement opportunities and which are intentional design decisions.
Update your assessment based on user responses. Remove any findings the user confirms are intentional.
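A validation question might take a shape like the following (the file and finding are hypothetical examples, and the field names are illustrative—defer to the AskUserQuestion tool's actual schema):

```json
{
  "tool": "AskUserQuestion",
  "arguments": {
    "questions": [
      {
        "question": "The duplicated parsing logic in src/api/search.ts looks like a DRY violation. Is this intentional?",
        "header": "Duplication",
        "options": [
          { "label": "Intentional", "description": "Keep as-is; remove from the assessment" },
          { "label": "Worth fixing", "description": "Include as a refactoring opportunity" }
        ],
        "multiSelect": false
      }
    ]
  }
}
```

Structured options like these keep validation fast for the user and give you an unambiguous answer to act on.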
Only after the user has validated your findings, save the assessment.
Save the assessment to docs/05-REFACTOR/assessment-<topic>.md:
# Refactoring Assessment: [Topic]
**Assessed:** [date]
**Scope:** [what was analyzed]
## Summary
[Brief overview of findings]
## Opportunities
### 1. [Opportunity Name]
- **Files:** [list]
- **Priority:** High/Medium/Low
- **Impact:** [what improves]
- **Complexity:** High/Medium/Low
- **Approach:** [how to refactor]
[Repeat for each opportunity]
## Recommendations
[Suggested order of operations]
## Risks
[Things to watch out for]
After the user has validated findings, suggest: "Run /02-improve-code docs/05-REFACTOR/assessment-<topic>.md, or describe what to tackle first."
When assessment is complete: "Run /02-improve-code to start improving the code."