Self-learning retrospective agent — analyzes what worked and what didn't after shipping, extracts conventions, patterns, and gotchas, tracks velocity metrics, and proposes CLAUDE.md updates. Creates compound improvement: every ship cycle makes the next one better. Use when: "retro", "retrospective", "what did we learn", "session review", "improve workflow", "what went wrong", "analyze this session", or automatically after /cks:ship completes. Also triggers on: "update conventions", "what patterns are we using", "track velocity".
From the cks plugin: npx claudepluginhub cardinalconseils/claude-starter --plugin cks

This skill is limited to using the following tools: Read, Write, Edit, Grep, Glob, Bash.

Bundled files: references/observability-config.md, references/output-formats.md, workflows/auto-retro.md, workflows/interactive-retro.md
Analyzes completed work to extract learnings that improve future cycles. The compound interest of AI-assisted development: every ship makes the next one faster and higher quality.
trigger → gather data → check deployment health → analyze patterns → extract learnings → save → propose updates
| Condition | Mode | Behavior |
|---|---|---|
| --auto flag or invoked from ship workflow | Auto | Lightweight, no interaction, focuses on data |
| No arguments | Interactive | Guided reflection with user Q&A |
| --metrics flag | Metrics | Show velocity dashboard only |
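The mode selection above can be sketched as a small dispatch function. This is illustrative only (the real skill resolves the mode inside the agent, not in Python), and the mode names are taken from the table:

```python
def select_mode(args: list[str]) -> str:
    """Map invocation arguments to a retro mode, per the table above."""
    if "--auto" in args:
        return "auto"         # lightweight, no interaction, data-focused
    if "--metrics" in args:
        return "metrics"      # velocity dashboard only
    return "interactive"      # no arguments: guided reflection with Q&A
```

Note that `--auto` wins even if other flags are present, matching the idea that ship-invoked retros should never block on user input.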
Runs automatically after /cks:ship completes. Reads artifacts, analyzes patterns,
saves learnings. No user interaction.
Read workflow: workflows/auto-retro.md
User-invoked via /cks:retro. Shows recent work summary, asks reflection questions,
combines user input with automated analysis.
Read workflow: workflows/interactive-retro.md
Quick dashboard of velocity metrics from .learnings/metrics.md.
Display:
Velocity Dashboard
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Phases completed: {total}
Avg phase duration: {time}
Retry rate: {%} ({retries}/{total_verifications})
Ship success rate: {%}
Conventions added: {count}
Gotchas documented: {count}
Recent velocity:
Phase {NN}: {name} — {duration} ({retries} retries)
Phase {NN}: {name} — {duration} ({retries} retries)
Phase {NN}: {name} — {duration} ({retries} retries)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
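As one example of how a dashboard figure might be derived, here is a minimal sketch of the retry-rate line. The field names and the "n/a" fallback are assumptions, not part of the skill's documented format:

```python
def retry_rate(retries: int, total_verifications: int) -> str:
    """Format the retry-rate line: percentage plus raw counts."""
    if total_verifications == 0:
        return "n/a"          # no verification data yet
    pct = 100 * retries / total_verifications
    return f"{pct:.0f}% ({retries}/{total_verifications})"
```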
The retro can pull deployment logs and production metrics to enrich its analysis.
See references/observability-config.md for full configuration (sources: Vercel, Railway, Cloudflare, Supabase, LangSmith, webhook).
Files written to .learnings/: session-log.md (append-only), conventions.md (CLAUDE.md candidates), gotchas.md (pitfalls), metrics.md (velocity).
See references/output-formats.md for complete templates and field formats.
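An append-only write to session-log.md might look like the sketch below. The entry layout (a dated heading plus a summary) is hypothetical; see references/output-formats.md for the real templates:

```python
from datetime import date
from pathlib import Path

def append_session_entry(root: Path, phase: str, summary: str) -> None:
    """Append one retro entry to .learnings/session-log.md (never rewrites history)."""
    log = root / ".learnings" / "session-log.md"
    log.parent.mkdir(parents=True, exist_ok=True)
    entry = f"\n## {date.today().isoformat()}: {phase}\n{summary}\n"
    with log.open("a") as f:   # open in append mode: the log is append-only
        f.write(entry)
```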
The retrospective agent proposes updates to CLAUDE.md and can auto-apply high-confidence ones.
Protocol:
Reads proposed conventions from .learnings/conventions.md.

Interactive mode (/cks:retro):
3. Display each proposed convention to the user:
Proposed CLAUDE.md update:
## Always Follow These Rules
+ - {new convention}
Apply this? (yes / no / later)
Auto mode (--auto, after ship/autonomous):
3. High-confidence conventions (observed 2+ times, or directly from user retro feedback) → auto-apply to .learnings/conventions.md as "Applied" AND append to .claude/rules/learnings.md (auto-generated guardrail file)
4. Medium/low-confidence conventions → save as "Proposed" for next interactive retro
5. Display summary of what was auto-applied:
Auto-applied {N} convention(s) to .claude/rules/learnings.md:
- {convention 1}
- {convention 2}
CRITICAL: The learning must actually change behavior. Every "Applied" convention must appear
in either CLAUDE.md or .claude/rules/learnings.md so that agents (executor, planner, designer)
read and follow it in the next phase. Conventions that sit only in .learnings/conventions.md
are invisible to agents and useless.
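A minimal sketch of that guarantee, under the paths the document describes: a convention only counts as "Applied" once it lands in .claude/rules/learnings.md. The observed-count threshold mirrors the auto-mode rule above; the function signature itself is hypothetical:

```python
from pathlib import Path

def auto_apply(root: Path, convention: str, observed: int) -> bool:
    """Apply a high-confidence convention to the guardrail file agents read."""
    if observed < 2:           # medium/low confidence: stays "Proposed"
        return False
    rules = root / ".claude" / "rules" / "learnings.md"
    rules.parent.mkdir(parents=True, exist_ok=True)
    existing = rules.read_text() if rules.exists() else ""
    if convention not in existing:   # idempotent: never duplicate a rule
        with rules.open("a") as f:
            f.write(f"- {convention}\n")
    return True
```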
When a gotcha is discovered (bug pattern, technology pitfall, domain-specific issue), record it in .learnings/gotchas.md with date, phase, and description.

| Integration | How |
|---|---|
| /cks:ship | After ship completes → Skill(skill="retro", args="--auto") |
| /cks:autonomous | After final ship → auto-retro on all phases |
| SessionStart hook | If .learnings/conventions.md has pending proposals → remind |
| Stop hook | If .learnings/session-log.md updated today → show count |
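The SessionStart hook row could be backed by a check like the one below. This is a sketch under assumptions: the "Proposed" marker format inside conventions.md is guessed, not specified by this document:

```python
from pathlib import Path

def pending_proposals(root: Path) -> int:
    """Count conventions still awaiting review in .learnings/conventions.md."""
    conv = root / ".learnings" / "conventions.md"
    if not conv.exists():
        return 0
    # Count lines still marked as proposed (marker format is an assumption).
    return sum(1 for line in conv.read_text().splitlines() if "Proposed" in line)
```

A hook would then remind the user only when the count is nonzero.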
| Failure | Behavior |
|---|---|
| No .prd/ directory | Skip PRD-specific analysis, do git-only analysis |
| No git history | Skip git analysis, only analyze PRD artifacts |
| Empty verification | Note "no verification data" in retro entry |
| CLAUDE.md doesn't exist | Propose creating it with discovered conventions |
| Observability source unavailable | Skip that source, note in session-log |
| Deploy logs show errors | Flag as GOTCHA, include error summary in session-log |
| LangSmith API key missing | Skip LLM observability, note "LLM traces not available" |
| No deploy detected | Skip deployment health entirely, note "no deployment found" |
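The degradation pattern in the first two table rows can be sketched as a planner that disables analysis steps instead of failing. Step names and the fallback note are illustrative:

```python
from pathlib import Path

def plan_analysis(root: Path) -> list[str]:
    """Pick analysis steps from whichever data sources actually exist."""
    steps = []
    if (root / ".prd").is_dir():
        steps.append("prd")    # PRD artifacts present
    if (root / ".git").is_dir():
        steps.append("git")    # git history available
    if not steps:
        steps.append("note: no data sources found")
    return steps
```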
This skill ships with opinionated defaults. Review and adapt to your needs:
- /cks:release — edit SKILL.md
- Tools: Read, Write, Edit, Grep, Glob, Bash. Add tools if needed.
- Model: sonnet. Remove to use your default model.

| Rationalization | Reality |
|---|---|
| "We don't have time for a retro" | Skipping retros means repeating mistakes. 15 minutes now saves hours next sprint. |
| "Everything went fine" | 'Fine' hides process improvements. Even successful sprints have learnings worth capturing. |
| "We'll remember for next time" | You won't. Write it down. Memory is unreliable; documented learnings compound. |
Auto-applied conventions are appended to .claude/rules/learnings.md (agents read this automatically).