Use after shipping a feature to run a structured retrospective. Gathers metrics (what shipped, what was cut, time spent), identifies what went well and what didn't, updates the PRD with learnings, and plans the next iteration. Prevents 'ship once and forget' — the root cause of MVP hell.
Install: `npx claudepluginhub chrsmay/codebrain-plugin --plugin codebrain`

This skill uses the workspace's default tool permissions.
Post-launch retrospective and iteration planning. Closes the loop from shipped feature back to product learning.
Usage: `/codebrain:retro [epic-slug]`
MVP hell happens when you ship once and never iterate. Retros force you to: measure what you shipped against what you planned, learn from what went wrong, and plan the NEXT version. Without this, your app accumulates features that are "good enough" but never great.
Read the epic artifacts:
- `.codebrain/epics/{slug}/prd.md` — what was planned
- `.codebrain/epics/{slug}/journeys.md` — what paths were identified

Check git history:
- `git log --oneline --since="[epic start date]"` — what was committed

Check Linear (if available) for ticket status; the MCP queries are covered in the Linear sync steps below.
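The time metrics below can be derived from this same history. A minimal sketch, assuming the epic's commits are on the current branch and GNU `date` is available (`EPIC_START` is a hypothetical placeholder for the date from discovery):

```bash
# Hedged sketch: derive the time metrics from git history.
# EPIC_START is an assumption: take it from the epic's discovery notes.
EPIC_START="2024-01-02"

# Earliest and latest commit dates since the epic started (ISO short format)
first=$(git log --since="$EPIC_START" --reverse --format=%as | head -n 1)
last=$(git log --since="$EPIC_START" --format=%as | head -n 1)

echo "First commit: $first"
echo "Last commit:  $last"

# Elapsed days between the two (GNU date; use gdate on macOS)
echo "Duration: $(( ($(date -d "$last" +%s) - $(date -d "$first" +%s)) / 86400 )) days"
```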
## Metrics
### Scope
| Category | Planned | Shipped | Cut | Deferred |
|----------|---------|---------|-----|----------|
| P0 Requirements | N | N | 0 | N |
| P1 Requirements | N | N | N | N |
| P2 Requirements | N | N | N | N |
| Edge Cases Handled | N | N | N | N |
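The "Planned" column can be counted mechanically if the PRD tags each requirement with a priority. A rough sketch, assuming a hypothetical line format like `- REQ-001 (P0): ...` in `prd.md` (adjust the pattern to your PRD template):

```bash
# Hedged sketch: count planned requirements per priority tier.
# Assumes the PRD lists requirements as lines like "- REQ-001 (P0): ..."
SLUG="checkout-redesign"   # hypothetical epic slug
for tier in P0 P1 P2; do
  count=$(grep -c "REQ-.*${tier}" ".codebrain/epics/${SLUG}/prd.md")
  echo "${tier} planned: ${count}"
done
```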
### Time
- **Appetite (planned):** [from discovery — e.g., "2 weeks"]
- **Actual time:** [first commit to launch]
- **Over/Under:** [+N days or -N days]
- **Biggest time sink:** [what took longer than expected]
### Quality
- **Verification cycles:** [how many fix-and-verify loops]
- **Critical issues found:** [count]
- **Spec deviations:** [count]
- **Post-launch bugs:** [count, from Linear or support]
Ask the user these questions ONE AT A TIME: what went well, what didn't go well (and why), what surprised you, and what should change for future features. Then write the retro document from the answers:
# Retrospective: [Feature Name]
**Epic:** [slug]
**Date:** [today]
**Duration:** [start to launch]
## Summary
[2-3 sentences: what was built, how it went, key takeaway]
## Metrics
[from Step 2]
## What Went Well
- [item 1]
- [item 2]
## What Didn't Go Well
- [item 1 — with root cause if identifiable]
- [item 2 — with root cause if identifiable]
## Surprises
- [unexpected discovery]
## Process Improvements
[Changes to apply to FUTURE features, not just this one]
- [ ] [Improvement 1 — e.g., "Write journey map earlier, before tech spec"]
- [ ] [Improvement 2 — e.g., "Set up monitoring before launch, not after"]
## PRD Update Recommendations
[What should change in the PRD based on reality]
- **REQ-001:** [still valid / needs modification / should be removed]
- **REQ-010:** [promoted to P0 based on user feedback]
- **New requirement:** [discovered during implementation]
## Next Iteration Plan
### Now (this week)
- [ ] [Bug fix or urgent improvement]
### Next (next 2 weeks)
- [ ] [P1 requirement that was deferred]
- [ ] [Edge case that was accepted but should be handled]
### Later (backlog)
- [ ] [P2 requirement]
- [ ] [Nice-to-have discovered during implementation]
## Constitution Updates
[Any new principles learned — add to constitution if applicable]
- [e.g., "Always set up error tracking before launch" → add to constitution]
Update the memory files:
- `.codebrain/memory/continuity.md` — what was shipped and what's next
- `.codebrain/memory/patterns.md` — any new patterns or anti-patterns discovered
- `.codebrain/memory/decisions.md` — decisions validated or invalidated by reality
- `.codebrain/memory/known-issues.md` — bugs or debt from this feature
- `.codebrain/memory/constitution.md` — new principles from the Constitution Updates section
- `.codebrain/epics/{slug}/retro.md` — the retrospective document itself

Sync to Linear via MCP tools. Read `.codebrain/config.json` for `linearSync` and `linearProjectId`.
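A minimal sketch of that config read, assuming the two keys sit at the top level of `config.json` and `jq` is available:

```bash
# Hedged sketch: read the Linear sync settings from the codebrain config.
linear_sync=$(jq -r '.linearSync // false' .codebrain/config.json)
linear_project=$(jq -r '.linearProjectId // empty' .codebrain/config.json)

if [ "$linear_sync" = "true" ]; then
  echo "Syncing retro to Linear project: $linear_project"
fi
```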
If Linear sync is active, create a Linear project update (retrospective summary). Set the health status: `onTrack` if ≥ 80% of planned scope shipped, `atRisk` if 50-80%, `offTrack` if < 50% (a sketch of this mapping follows the template below). Use this body:

## Retrospective — [date]
### Metrics
| Metric | Value |
|--------|-------|
| Planned scope | [N tickets] |
| Shipped | [N tickets] |
| Cut | [N tickets] |
| Deferred | [N tickets] |
| Appetite | [X hours] |
| Actual | [Y hours] |
### What Went Well
[key wins]
### What Didn't
[key issues]
### Process Improvements
[changes for future features]
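The health thresholds above reduce to one percentage check. A sketch, with hypothetical `planned` and `shipped` counts standing in for the scope metrics:

```bash
# Hedged sketch: map shipped percentage to a project health value.
planned=10   # hypothetical counts taken from the scope metrics
shipped=8

pct=$(( shipped * 100 / planned ))
if [ "$pct" -ge 80 ]; then
  health="onTrack"
elif [ "$pct" -ge 50 ]; then
  health="atRisk"
else
  health="offTrack"
fi
echo "Project health: $health (${pct}% of planned scope shipped)"
```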
Gather metrics from Linear (not just local files):
- `list_issues` for the project to count Done, Cancelled, and Backlog issues

Create "Next Iteration" issues from the plan above:
- `create_issue` with priority 1 (Urgent), label `next-iteration` (the "Now" items)
- `create_issue` with priority 2 (High), label `next-iteration` (the "Next" items)
- `create_issue` with priority 4 (Low), label `backlog` (the "Later" items)

Close the current project (or mark as completed):
- `update_project` to set status to "Completed"

Store the retro as a project document.
After the retro is complete:
- If the retro surfaced urgent fixes, run `/codebrain:plan` or `/codebrain:debug` to address them
- Run `/codebrain:prd` to update the PRD, then `/codebrain:epic create` for the next cycle