From citadel
Generates a structured postmortem from campaign files, telemetry logs, the feature ledger, and git history. Analyzes what broke, what the safety systems caught, scope drift, and recurring patterns after campaigns or incidents.
`npx claudepluginhub sethgammon/citadel --plugin citadel`

This skill uses the workspace's default tool permissions.
**Use when:** A campaign just completed and you want a structured analysis of what broke, what safety systems caught, and what patterns emerged. Also for ad-hoc incident analysis from recent git history.

**Don't use when:** You want to preserve session context for the next conversation (use `/session-handoff`), extract reusable patterns from findings into the knowledge base (use `/learn`), or score and improve quality iteratively (use `/improve`).
One of:
Collect data from all available sources:
- From the campaign file (if it exists)
- From telemetry (`.planning/telemetry/`)
- From git history
- From the session itself (if no campaign)
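The collection step can be sketched as a single shell pass that concatenates whatever sources exist into one scratch file. The paths are the skill's own; the `postmortem-raw.txt` filename and the `.jsonl` telemetry extension are assumptions for illustration.

```shell
# Sketch: gather whatever sources exist into one scratch file for analysis.
# Paths are from the skill; the scratch filename and .jsonl extension are assumptions.
RAW="postmortem-raw.txt"
: > "$RAW"                                                      # start empty
cat .planning/campaigns/*.md    >> "$RAW" 2>/dev/null || true   # campaign direction, if any
cat .planning/telemetry/*.jsonl >> "$RAW" 2>/dev/null || true   # hook/gate events, if any
git log --stat --no-color       >> "$RAW" 2>/dev/null || true   # commit history, if in a repo
wc -l < "$RAW"                                                  # how much was collected
```

Each source degrades gracefully: a missing campaign file, empty telemetry directory, or non-repo directory simply contributes nothing.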
Identify patterns across the data:
- **What broke:** List every failure, error, or unexpected outcome. For each: what happened, what caught it (hook/gate/human/nothing), and what it cost (time, rework, tokens).
- **What the safety systems caught:** Circuit breaker activations, quality gate blocks, anti-pattern warnings, typecheck failures. This is the "invisible value" section: problems prevented.
- **What drifted:** Compare the campaign direction to what was built. Did scope expand? Did phases get skipped or reordered? Did the architecture change mid-build?
- **What patterns emerged:** Recurring error types, files that kept needing fixes, phases that took longest, common anti-patterns.
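One cheap, evidence-based signal for "files that kept needing fixes" is commit-touch frequency from git history. A sketch, meant to run inside the project repo; the `hotspots.txt` scratch filename is an assumption.

```shell
# Sketch: rework hotspots -- files appearing in the most commits.
# Run inside the project repo; hotspots.txt is a hypothetical scratch file.
git log --format= --name-only 2>/dev/null \
  | grep -v '^$' \
  | sort | uniq -c | sort -rn | head -10 > hotspots.txt || true
cat hotspots.txt   # "count  path" pairs, highest-churn files first
```

A file near the top of this list that was not central to the campaign's scope is a strong candidate for the Patterns section.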
Write to `.planning/postmortems/postmortem-{slug}-{date}.md`:
# Postmortem: {Campaign Name or Session Description}
> Date: {ISO date}
> Campaign: {path to campaign file, or "ad-hoc session"}
> Duration: {time from first to last commit}
> Outcome: {completed | partial | parked}
## Summary
{2-3 sentences: what was attempted, what happened, what the result was}
## What Broke
{Numbered list. For each:}
### {N}. {Short description}
- **What happened:** {the failure}
- **Caught by:** {hook name / quality gate / human / nothing}
- **Cost:** {rework time, files affected, phases repeated}
- **Fix:** {what resolved it}
- **Infrastructure created:** {new hook rule, new anti-pattern, new end condition — or "none needed"}
## What Safety Systems Caught
{Things that WOULD have been problems without the hooks/gates}
| System | What It Caught | Times | Impact Prevented |
|--------|---------------|-------|-----------------|
| {hook/gate name} | {description} | {count} | {what would have happened} |
## Scope Analysis
- **Planned:** {what the campaign direction said}
- **Built:** {what the feature ledger shows}
- **Drift:** {none | minor | significant — with specifics}
## Patterns
{Recurring themes worth watching:}
- {pattern 1}
- {pattern 2}
## Recommendations
{Concrete next actions:}
1. {recommendation — e.g., "Add anti-pattern rule for X"}
2. {recommendation — e.g., "Phase Y needs tighter end conditions"}
## Numbers
| Metric | Value |
|--------|-------|
| Phases planned | {N} |
| Phases completed | {N} |
| Commits | {N} |
| Files changed | {N} |
| Circuit breaker trips | {N} |
| Quality gate blocks | {N} |
| Anti-pattern warnings | {N} |
| Rework cycles | {N} |
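The git-derived rows of the Numbers table (commits, files changed) can be computed mechanically; the telemetry-derived rows come from the event logs. A sketch, assuming the whole history is the campaign window; in practice `RANGE` would be narrowed to the campaign's first and last commits, and `numbers-rows.md` is a hypothetical scratch file.

```shell
# Sketch: compute the git-derived Numbers rows. RANGE is a placeholder --
# narrow it to the campaign's commit window in practice.
RANGE="HEAD"
COMMITS=$(git rev-list --count "$RANGE" 2>/dev/null || echo "N/A")
FILES=$(git log --format= --name-only "$RANGE" 2>/dev/null | sort -u \
          | awk 'NF{n++} END{print n+0}')
printf '| Commits | %s |\n| Files changed | %s |\n' "$COMMITS" "$FILES" > numbers-rows.md
cat numbers-rows.md
```

Outside a git repo both values degrade (to `N/A` and `0`) rather than erroring, which matches the "don't manufacture data" rule below.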
Output the HANDOFF block from the Exit Protocol, then suggest: Run `/learn {campaign-slug}` to extract patterns into the knowledge base.
Campaign not found: If the specified campaign file doesn't exist, check .planning/campaigns/ for the most recently modified campaign. If no campaigns exist, run in ad-hoc mode using recent git history and session context.
No telemetry data: Proceed without telemetry. Mark the "What Safety Systems Caught" table as "No telemetry available" and the Numbers section fields as "N/A". Don't manufacture data.
Partial campaign (parked or in-progress): Generate the postmortem with Outcome: partial. Document what was completed and what was parked. Include a "Remaining Work" section listing incomplete phases.
If .planning/postmortems/ does not exist: Create it before writing. If .planning/ itself doesn't exist, output the postmortem inline and note: "Run /do setup to initialize .planning/ for future storage."
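The campaign-not-found and missing-directory cases above chain naturally: named campaign, then most recently modified campaign, then ad-hoc mode. A sketch; the `SLUG` value is a placeholder, and the paths are the skill's own.

```shell
# Sketch of the fallback chain: named campaign -> most recent campaign -> ad-hoc.
SLUG="example"                                   # placeholder campaign slug
CAMPAIGN=".planning/campaigns/${SLUG}.md"
if [ ! -f "$CAMPAIGN" ]; then
  # Fall back to the most recently modified campaign, if any exists
  CAMPAIGN=$(ls -t .planning/campaigns/*.md 2>/dev/null | head -n 1)
fi
mkdir -p .planning/postmortems                   # ensure the output dir before writing
echo "Campaign: ${CAMPAIGN:-ad-hoc session}"
```

With no `.planning/` tree at all, this prints `Campaign: ad-hoc session` and creates the output directory in one step.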
---HANDOFF---
- Postmortem: {name}
- Document: .planning/postmortems/postmortem-{slug}-{date}.md
- Failures documented: {count}
- Safety catches: {count}
- Recommendations: {count}
- Reversibility: green — one file written to .planning/postmortems/; git rm to undo
---