Use when finishing a project, reflecting after a sprint, or capturing lessons learned. Also triggers on 'retro', 'retrospective', 'lessons learned', 'what did we learn', 'post-mortem', or 'what worked and what did not'.
This skill uses the workspace's default tool permissions.
This skill captures organizational knowledge that would otherwise be lost between projects. By conducting a structured retrospective interview, it produces a lessons-learned document that future discovery phases can reference. The goal is to compound knowledge: each project makes the next one better by surfacing patterns, anti-patterns, and decision outcomes.
Use full version when:
Use short version when:
Reference: .claude/skills/orchestrator/dependency-graph.yaml
BLOCKS (must exist — auto-invoke if missing):
ENRICHES (improves output — warn if missing):
- docs/ets/projects/{project-slug}/discovery/project-context.md — Provides project name and scope for context
- docs/ets/projects/{project-slug}/planning/prd.md — Provides feature list for decision-outcome mapping
- docs/ets/projects/{project-slug}/architecture/tech-spec.md — Provides ADRs for architectural decision review

If upstream documents exist, read them to pre-populate context and ask more targeted questions. If they don't exist, the interview still works — it just relies more on user memory.
MANDATORY: This skill MUST write its artifact to disk before declaring complete.
Create parent directories (mkdir -p) if needed. If the Write fails: report the error to the user. Do NOT proceed.
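A minimal sketch of that save step, assuming the path layout from the output spec (illustrative only — the skill's actual Write tool handles this internally):

```python
from pathlib import Path

def save_artifact(path: str, content: str) -> None:
    """Write the retrospective to disk, creating parent dirs as needed (mkdir -p)."""
    target = Path(path)
    target.parent.mkdir(parents=True, exist_ok=True)  # equivalent of mkdir -p
    target.write_text(content, encoding="utf-8")
    # If write_text raises, the caller must surface the error and stop —
    # the skill must not declare itself complete without the artifact on disk.
```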
This skill follows the ETUS interaction standard. Your role is a thinking partner, not an interviewer — suggest alternatives, challenge assumptions, and explore what-ifs instead of only extracting information. The retrospective interview is intentionally reflective — give the user space to think. These are not rapid-fire questions.
One question per message — Ask one question, wait for the answer, then ask the next. Retrospectives benefit from slow, thoughtful responses.
3-4 suggestions for choices — When asking about patterns or categories, present concrete examples to help the user think. Example: "What kind of pattern was this? (A) Process pattern — how the team worked, (B) Technical pattern — an architecture or code approach, (C) Communication pattern — how decisions were made and shared, (D) Something else."
Propose approaches before generating — Before writing each section, briefly describe your framing: "For the 'What Didn't Work' section, I'll focus on systemic issues rather than individual blame. Does that approach work for you?"
Present output section-by-section — Present each section for approval before moving on.
Track outstanding questions — If something can't be answered now, note it as a follow-up item in the retrospective.
Multiple handoff options — At completion, present next steps.
Resume existing work — Before starting, check if the target artifact already exists at the expected path. If it does, ask the user: "I found an existing retrospective at [path]. Should I continue from where it left off, or start fresh?" If resuming, read the document, summarize the current state, and continue from outstanding gaps.
This skill reads and writes persistent memory to maintain context across sessions.
On start (before any interaction):
- docs/ets/.memory/project-state.md — know where the project is
- docs/ets/.memory/decisions.md — don't re-question closed decisions
- docs/ets/.memory/preferences.md — apply user/team preferences silently
- docs/ets/.memory/patterns.md — apply discovered patterns

On finish (after saving artifact, before CLOSING SUMMARY):
project-state.md is updated automatically by the PostToolUse hook — do NOT edit it manually. To record new memory entries, run:

- `python3 .claude/hooks/memory-write.py decision "<decision>" "<rationale>" "<this-skill-name>" "<phase>" "<tag1,tag2>"`
- `python3 .claude/hooks/memory-write.py preference "<preference>" "<this-skill-name>" "<category>"`
- `python3 .claude/hooks/memory-write.py pattern "<pattern>" "<this-skill-name>" "<applies_to>"`

The .memory/*.md files are read-only views generated automatically from memory.db. Never edit them directly.
Conduct the retrospective as a reflective conversation. Ask one question at a time, listen carefully, and follow up where the user's answer reveals something worth exploring deeper.
Before starting the interview:
- docs/ets/projects/{project-slug}/discovery/project-context.md exists — read for project name and scope
- docs/ets/projects/{project-slug}/planning/prd.md exists — read for feature list
- docs/ets/projects/{project-slug}/architecture/tech-spec.md exists — read for ADR list
- docs/ets/projects/{project-slug}/learnings/ — read to avoid asking about already-captured learnings

If documents exist, use them to ask more specific questions (e.g., "ADR-3 chose PostgreSQL over MongoDB — how did that decision play out?").
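The pre-interview check above can be sketched as follows — paths follow the project layout in this document, and the function name is illustrative:

```python
from pathlib import Path

def find_upstream_docs(project_slug: str, root: str = "docs/ets/projects") -> dict:
    """Return the upstream documents that exist for this project, keyed by role."""
    base = Path(root) / project_slug
    candidates = {
        "project-context": base / "discovery" / "project-context.md",
        "prd": base / "planning" / "prd.md",
        "tech-spec": base / "architecture" / "tech-spec.md",
    }
    # Only existing files are returned; missing ones mean the interview
    # falls back to user memory for that area.
    return {name: path for name, path in candidates.items() if path.is_file()}
```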
Primary question (ask alone, one message):
"Looking back at this project, what went well? What are you proud of?"
Follow-up probes — ask one at a time based on the answer:
Primary question (ask alone, one message):
"What didn't go well? What was frustrating or harder than expected?"
Follow-up probes — ask one at a time based on the answer:
Primary question (ask alone, one message):
"Did you notice any recurring patterns — things that kept coming up, either positively or negatively?"
Follow-up probes — ask one at a time based on the answer:
Primary question (ask alone, one message):
"Think about the key decisions made during this project — which ones paid off, and which ones backfired?"
Follow-up probes — ask one at a time based on the answer:
Primary question (ask alone, one message):
"If you could go back to the start of this project with everything you know now, what would you do differently?"
Follow-up probes — ask one at a time based on the answer:
After all 5 questions, present a brief synthesis of the key learnings and ask: "Does this capture the most important lessons? Anything I missed?"
Each learning gets a unique ID:
- LEARN-# — Sequential learning ID (LEARN-1, LEARN-2, LEARN-3...)
- Tags: architecture, data, ux, api, process, communication, testing, deployment, scope, estimation, tooling

Example:
LEARN-1 [architecture, process]: PostgreSQL was the right choice for our relational data model, but we should have set up read replicas from the start.
LEARN-2 [estimation, scope]: Feature X took 3x longer than estimated because we underestimated the authentication complexity.
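The ID and tag scheme above can be sketched as a small formatter — the tag vocabulary is the list given earlier; the function name is illustrative:

```python
def format_learning(index: int, tags: list, text: str) -> str:
    """Render a learning entry as 'LEARN-N [tag1, tag2]: text'."""
    allowed = {"architecture", "data", "ux", "api", "process", "communication",
               "testing", "deployment", "scope", "estimation", "tooling"}
    unknown = set(tags) - allowed
    if unknown:
        # Reject tags outside the controlled vocabulary so learnings stay searchable.
        raise ValueError(f"unknown tags: {sorted(unknown)}")
    return f"LEARN-{index} [{', '.join(tags)}]: {text}"
```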
Path: docs/ets/projects/{project-slug}/learnings/retro-{slug}.md
Where {slug} is derived from:
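The derivation rule is not spelled out here; a typical slugification (lowercase, hyphen-separated, alphanumerics only) would look like the sketch below — treat this as an assumption, not the canonical rule:

```python
import re

def slugify(name: str) -> str:
    """Lowercase, replace runs of non-alphanumerics with hyphens, trim edge hyphens."""
    slug = re.sub(r"[^a-z0-9]+", "-", name.lower())
    return slug.strip("-")
```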
The document follows the template in knowledge/template.md.
Reference: knowledge/template.md for the retrospective document template and structure.

This skill has no strict upstream dependencies. It works best with existing project documents but can run purely from interview data.
If upstream documents exist:
Before marking this document as COMPLETE:
If any check fails, mark as DRAFT with <!-- STATUS: DRAFT --> at top.
Report the artifact path (docs/ets/projects/{project-slug}/learnings/retro-{slug}.md) plus paths to upstream documents (none — retrospectives have no BLOCKS dependencies).

Tell the user: "Document saved to docs/ets/projects/{project-slug}/learnings/retro-{slug}.md. The spec reviewer approved it. Please review and let me know if you want any changes before we proceed." Wait for the user's response. If they request changes, make them and re-run the spec review. Only proceed to validation after user approval.
After saving and validating, display:
Retrospective saved to `docs/ets/projects/{project-slug}/learnings/retro-{slug}.md`
Status: [COMPLETE | DRAFT]
Learnings captured: LEARN-1 through LEARN-N
Tags covered: [list of unique tags]
Then present these options:
- /start-project

Wait for the user to choose before taking any action.
| Error | Severity | Recovery | Fallback |
|---|---|---|---|
| No upstream documents found | Info | Run interview from user memory only | Proceed — learnings are still valuable |
| User provides very short answers | Medium | Ask follow-up probes | Accept minimal input, mark as DRAFT |
| Prior retrospective exists for same project | Low | Ask: "Update existing or create new?" | Default to create new with date suffix |
| Output validation fails | Medium | Mark as DRAFT, flag thin sections | Proceed — partial learnings are better than none |