From solopreneur
Reviews conversation history for corrections, traces errors to skills, CLAUDE.md, memory, or tools, and proposes process fixes. Invoke after corrections or at session end.
npx claudepluginhub hanamizuki/solopreneur --plugin neo4j-dev

This skill uses the workspace's default tool permissions.
A session retrospective that turns conversation mistakes into durable process improvements, and successful patterns into reusable skills.
Without retro, the same mistakes repeat across sessions — Claude gets corrected, apologizes, and forgets by the next conversation. Retro breaks this cycle by tracing errors to their source (a skill, CLAUDE.md, memory, or tool behavior) and proposing concrete fixes that persist.
1. Scan conversation for correction signals
2. Classify: errors found → Path A | smooth session → Path B
3. For each finding: analyze → attribute → propose action
4. Run Path C (token efficiency analysis)
5. Present report with action proposals
6. Ask user which actions to execute
7. Ask user if they want to save the report
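The flow above can be sketched as a single driver function. This is a hypothetical outline only; the function names, the `(role, text)` message shape, and the `startswith`-based classifier are invented for illustration and are not part of Claude Code:

```python
# Hypothetical sketch of the retro flow; names and heuristics are illustrative.
CORRECTION_MARKERS = ("no,", "actually,", "that's not", "i said", "wrong,")

def run_retro(messages):
    """messages: list of (role, text) tuples from one session."""
    corrections = [text for role, text in messages                  # step 1: scan
                   if role == "user" and text.lower().startswith(CORRECTION_MARKERS)]
    path = "A" if corrections else "B"                              # step 2: classify
    findings = [{"what": c, "attribution": None} for c in corrections]  # step 3
    return {"path": path, "findings": findings,
            "token_efficiency": []}                                 # step 4 runs either way
```

Steps 5–7 (present, ask, save) stay interactive and are omitted from the sketch.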
Search the conversation for moments where the user corrected or redirected Claude's behavior.
Correction signal patterns (not exhaustive — use judgment):
Also scan for silent corrections — places where the user quietly fixed something Claude should have done (e.g., user manually ran a command Claude skipped).
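A regex pass over user messages can surface candidates for review. The patterns below are invented examples, not an official list; real scanning should use judgment, and silent corrections by definition will not match any phrase list:

```python
import re

# Illustrative correction-signal patterns, not an official list.
SIGNALS = [
    r"\bno[,.]?\s+i\s+(meant|said|asked)",   # explicit redirection
    r"\bactually\b",                          # mid-course correction
    r"\bthat'?s\s+(wrong|not\s+what)",        # direct disagreement
    r"\b(undo|revert)\b",                     # rollback request
]

def correction_candidates(user_messages):
    """Return messages that match at least one signal pattern."""
    return [msg for msg in user_messages
            if any(re.search(p, msg, re.IGNORECASE) for p in SIGNALS)]
```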
For each correction found, build a root cause analysis:
State factually: what did Claude do, and what should it have done? No defensiveness, no hedging. Just the facts.
Identify which document or mechanism should have guided the correct behavior:
| Source type | How to check |
|---|---|
| Skill | Was a skill invoked? Read the skill — does it clearly cover this case? |
| CLAUDE.md | Does project or global CLAUDE.md have instructions for this? |
| Memory | Is there a relevant memory file that should have applied? |
| Tool behavior | Was a tool used incorrectly, or was the wrong tool chosen? |
| No source | No existing document covers this case |
Read the actual source file to verify — don't rely on memory of what it says.
Determine which category the error falls into:
Source is unclear — The document exists but is ambiguous or poorly structured, making it easy to misinterpret. (Example: a default rule buried in an "override" subsection)
Source is missing — No document covers this scenario. Claude had to improvise and guessed wrong.
Execution drift — The document is clear, but Claude didn't follow it. This happens when instructions are long or when shortcuts seem reasonable in context.
One-off mistake — A simple slip (wrong file path, typo, etc.) with no systemic cause.
Based on the attribution:
| Attribution | Proposed action |
|---|---|
| Source unclear | Update the source — rewrite the ambiguous section with clearer structure |
| Source missing | Create source — new skill, memory, or CLAUDE.md section |
| Execution drift | Save feedback memory — a concise rule that catches attention on future reads |
| One-off mistake | No action needed — note it but don't over-engineer a fix |
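The attribution-to-action mapping above is a straight lookup. Sketched here with paraphrased action names; the keys and values are illustrative identifiers, not a real API:

```python
# Paraphrased from the attribution table; names are illustrative only.
PROPOSED_ACTIONS = {
    "source_unclear": "update_source",          # rewrite the ambiguous section
    "source_missing": "create_source",          # new skill, memory, or CLAUDE.md section
    "execution_drift": "save_feedback_memory",  # concise rule for future reads
    "one_off": None,                            # note it, but no fix
}

def propose_action(attribution):
    return PROPOSED_ACTIONS[attribution]
```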
For each action, specify:
When the session went smoothly, look for patterns worth preserving:
For each pattern, assess:
Only propose extraction for patterns scoring high on at least 2 of 3 criteria.
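The 2-of-3 threshold reduces to a simple count. The criterion names below are hypothetical placeholders (the concrete criteria are defined by the skill, not invented here); only the threshold rule comes from the text above:

```python
def worth_extracting(scores):
    """scores: criterion -> True if the pattern scores high on it.
    Extraction is proposed only when at least 2 of 3 criteria score high."""
    return sum(scores.values()) >= 2

# Hypothetical criterion names, for illustration only:
verdict = worth_extracting({"reusable": True, "non_obvious": True, "general": False})
```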
Runs after Path A/B. The goal is to find token waste within this session and suggest concrete improvements for future sessions.
Claude Code cannot access exact token counts, so use heuristic estimation based on observable signals: file sizes read, tool call count, subagent dispatches, and conversation round-trips. Focus on actionable patterns, not precise numbers.
Review the conversation for these patterns:
| Signal | What to look for |
|---|---|
| Subagent model mismatch | Opus agent dispatched for tasks a cheaper model could handle (simple grep, file lookup, straightforward code generation) |
| Redundant operations | Same file read multiple times, identical or near-identical searches repeated |
| Oversized reads | Large files (>500 lines) read in full when only a small section was needed |
| Agent overuse | Subagent spawned for tasks achievable with direct Grep/Glob/Read (see 4c for skill-required dispatches) |
| Sequential round-trips | Multiple dependent tool calls that could have been parallelized, or information gathered piecemeal that could have been collected in one pass |
| Wasted exploration | Dead-end research paths that could have been avoided with a better starting question |
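Several of these signals can be counted mechanically from a tool-call log. This is a sketch under the assumption that such a log is available as a list of dicts; the field names (`tool`, `target`, `lines`) are invented for illustration:

```python
from collections import Counter

def cost_signals(tool_calls):
    """tool_calls: dicts like {"tool": "Read", "target": "src/app.py", "lines": 1200}.
    Heuristic counters only; exact token counts are not observable."""
    read_targets = Counter(c["target"] for c in tool_calls if c["tool"] == "Read")
    return {
        # Reads of large files (>500 lines) taken in full
        "oversized_reads": sum(1 for c in tool_calls
                               if c["tool"] == "Read" and c.get("lines", 0) > 500),
        # Extra reads of a file beyond the first
        "redundant_reads": sum(n - 1 for n in read_targets.values() if n > 1),
        # Subagent dispatches (Task tool calls)
        "subagent_dispatches": sum(1 for c in tool_calls if c["tool"] == "Task"),
    }
```

Subagent model mismatch and wasted exploration still need judgment; counters only flag where to look.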
For each cost signal found:
Estimate the saving: high (different model tier or eliminated agent), medium (fewer round-trips or targeted reads), low (minor optimization).

If no subagent dispatches occurred in this session, skip this section.
Review each subagent dispatch in the session:
- Was the agent type appropriate for the task (e.g., Explore for a task that only needed Grep)?
- Check the model set in the agent definition frontmatter, not just via the dispatch model parameter. If the agent definition pins a model, use that as the actual tier for cost assessment.
- When a skill required dispatching that specific agent type and the agent definition pins the model: do not blame the caller, but DO evaluate whether the skill or agent definition itself is over-specified. If so, propose updating the skill or agent definition as an actionable finding (e.g., "agent X pins opus but its tasks only need sonnet-level reasoning; consider changing the agent definition").

Present the report directly in conversation (not as a file):
# Retro: YYYY-MM-DD
## Findings
### 1. [Short title of error or pattern]
- **What happened**: [factual description]
- **Root cause**: [source file path + specific section, or "no source"]
- **Attribution**: source unclear / source missing / execution drift / one-off
- **Proposed action**: [action type] — [brief description]
- **Details**: [what specifically to change]
### 2. [Next finding...]
...
## Summary
- Errors: N (action needed: N, no action: N)
- Successful patterns: N (worth extracting: N)
## Proposed Actions
1. [Action type] [target file] — [one-line description]
2. ...
## Token Efficiency (omit if Path C was skipped or no cost signals found)
### Cost Signals
- Subagents dispatched: N (by type: Explore: N, ios-dev: N, ...)
- Large file reads (>500 lines): N
- Redundant operations: N
- Avoidable round-trips: N
### Optimization Opportunities
1. [Specific suggestion] — estimated saving: high/medium/low
- **Proposed action**: [save feedback memory / update skill / update agent definition / update CLAUDE.md / no action]
- **Target**: [file path or "feedback memory: ..."]
2. ...
### Subagent Dispatch Issues (if any)
- [Agent description] used [type/model] → [recommended alternative] because [reason]
Ask which actions to execute. Combine actions from both "Proposed Actions" (Path A/B) and "Optimization Opportunities" (Path C) into a single numbered list for user approval. The user may approve all, some, or none. Only execute approved actions.
Execute approved actions. For each:
Ask if user wants to save the report.
If yes, save to docs/retro/YYYY-MM-DD.md (create the directory if needed).
If no, the report lives only in conversation history.
Very short session (< 5 exchanges, where one exchange = one user message + one assistant response): Tell the user there's not enough context for a meaningful retro. Offer to note any specific concern instead.
User was wrong, not Claude: If investigation reveals the user's correction was based on a misunderstanding, say so respectfully with evidence. Don't create process fixes for non-problems.
Multiple errors with same root cause: Group them under one finding. One fix should address all instances.
Sensitive corrections: If the user's correction was about tone or communication style (not technical), save as a feedback memory rather than a skill update.