Session retrospective — review recent work to discover improvement opportunities through interactive dialogue. Analyzes corrections, undocumented practices, efficiency patterns, tool usage, and workflow gaps. Use when the user wants to reflect on recent work and find ways to improve their AI collaboration workflow. Common moments: end of session, after a PR, after debugging, after a code review, after finishing a major task, or whenever something felt inefficient. Use when asked to "retro", "session retro", "session review", "review this session", "what can I improve", "retrospective", "what went wrong", "how can I be more efficient", or when the user wants to improve their CLAUDE.md, discover useful skills, optimize existing skills, or design new workflows based on recent patterns. Boundary: not for code review, not for PR review (use pr-review-toolkit).
From vp-retro. Install: `npx claudepluginhub vdustr/vp-claude-code-marketplace --plugin vp-retro`. This skill uses the workspace's default tool permissions.
References: references/dimensions.md, references/subagent-guide.md
Review a Claude Code session to find improvement opportunities. The retro works through interactive dialogue — observe what happened, discuss findings with the user, then surface actionable recommendations.
The goal is improving the user's AI collaboration efficiency: better prompts, better docs, better tools, better workflows.
Example triggers:
- "Let's retro this session"
- "/retro"
- "What could I improve from this session?"
Session review — find optimization opportunities
The flow below is guidance, not a script — adapt it naturally to the conversation. Not every session needs every step; a short session with no issues might need only a quick observation before moving on.
Review the session conversation and freely identify anything noteworthy. Don't constrain yourself to predefined categories — let observations emerge naturally from what actually happened.
For each observation, provide a one-line finding and an initial actionable recommendation. Even if the user doesn't deep-dive, every observation should offer a useful takeaway.
After the open-ended scan, use the 15 dimensions in dimensions.md as a safety-net checklist — scan for anything the open-ended observation might have missed. Only surface additional findings that are genuinely worth noting.
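The two-pass scan amounts to a set difference: surface only dimensions the open-ended pass did not already touch. A minimal sketch, assuming hypothetical dimension names and a `dimension` tag on each finding (neither is defined by this skill's files):

```python
# Hypothetical sketch of the safety-net pass: return only the dimensions
# that no open-ended finding already covers.
def safety_net(all_dimensions, open_ended_findings):
    covered = {f["dimension"] for f in open_ended_findings if f.get("dimension")}
    return [d for d in all_dimensions if d not in covered]

# Stand-in names, not the real 15 dimensions from dimensions.md.
dimensions = ["corrections", "tool-usage", "workflow-gaps"]
findings = [{"text": "user corrected file paths twice", "dimension": "corrections"}]
print(safety_net(dimensions, findings))  # → ['tool-usage', 'workflow-gaps']
```

The point of the sketch: the checklist pass adds findings only for uncovered dimensions, so it never duplicates the open-ended scan.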
Include both: findings from the open-ended scan and any additional findings surfaced by the dimension checklist.
Present observations one at a time with a progress indicator (e.g., [2/6]). For each observation, give the initial recommendation and ask if the user wants to deep-dive.
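One way to render that walkthrough format, sketched with invented observation fields (`finding`, `recommendation` are illustrative, not part of the skill's data model):

```python
# Hypothetical rendering of the one-at-a-time walkthrough with a
# [i/n] progress indicator and a deep-dive prompt per observation.
def walkthrough(observations):
    total = len(observations)
    lines = []
    for i, obs in enumerate(observations, start=1):
        lines.append(f"[{i}/{total}] {obs['finding']}")
        lines.append(f"  Recommendation: {obs['recommendation']}")
        lines.append("  Deep-dive? (yes / no / skip)")
    return "\n".join(lines)

obs = [
    {"finding": "Three retries on the same failing test",
     "recommendation": "Pin the flaky fixture"},
    {"finding": "Manual file search repeated across turns",
     "recommendation": "Add a glob hint to CLAUDE.md"},
]
print(walkthrough(obs))
```

The progress indicator keeps the user oriented even when the retro surfaces many observations.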
The user might: accept the initial recommendation as-is, mark the observation for a later deep-dive, or skip it entirely.
After walking through all observations, if the user selected any for deep-dive, assess which items genuinely need subagent research versus items that are clear enough to act on directly. Present this assessment and ask the user to confirm before spawning subagents.
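The triage before spawning subagents is a simple partition. A sketch under an assumed `root_cause_known` flag (hypothetical; set however the discussion with the user resolves each item):

```python
# Hypothetical triage: split confirmed deep-dive items into those that
# need subagent research and those clear enough to act on directly.
def triage(items):
    needs_research = [i for i in items if not i["root_cause_known"]]
    act_directly = [i for i in items if i["root_cause_known"]]
    return needs_research, act_directly

items = [
    {"title": "Repeated failed edits in a large file", "root_cause_known": False},
    {"title": "Missing lint command in CLAUDE.md", "root_cause_known": True},
]
research, direct = triage(items)
print([i["title"] for i in research])  # → ['Repeated failed edits in a large file']
```

Presenting this split to the user before spawning anything keeps subagent work limited to items that genuinely need research.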
Each subagent follows the cycle in subagent-guide.md: research the observation thoroughly, analyze root causes, design concrete solutions, and present findings with a recommendation.
Present each subagent's result one at a time with progress. The user can: accept the recommendation, discuss and refine it, or reject it.
After discussing all results, compile confirmed actions into a recommendation summary. For each action, present what to do and why. If the user asks to persist (e.g., "write it down"), output a markdown summary in the chat — do not write files.
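A sketch of the what-and-why summary, assuming hypothetical `what`/`why` fields on each confirmed action (the skill does not prescribe a schema):

```python
# Hypothetical compiler for the in-chat markdown summary: one bullet
# per confirmed action, stating what to do and why.
def summary_markdown(actions):
    lines = ["## Retro recommendations"]
    for a in actions:
        lines.append(f"- **{a['what']}**: {a['why']}")
    return "\n".join(lines)

actions = [
    {"what": "Add the test command to CLAUDE.md",
     "why": "the agent rediscovered it three times this session"},
]
print(summary_markdown(actions))
```

Note the output goes to the chat as markdown only; per the skill, no files are written.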
Close the retro explicitly: tell the user the retro is complete and that recommended actions are theirs to initiate when ready.