From citadel
Extracts successful patterns, failed patterns, key decisions, and quality rule candidates from completed campaign files, postmortems, and telemetry audit logs. Writes to knowledge base and appends rules to harness.json.
```
npx claudepluginhub sethgammon/citadel --plugin citadel
```

This skill uses the workspace's default tool permissions.
**Use when:** You have a completed campaign and want to extract successful patterns, failed patterns, key decisions, and quality rule candidates into the knowledge base.
Related skills:
- Generates a structured postmortem from campaign files, telemetry logs, the feature ledger, and git history. Analyzes what broke, safety-system catches, scope drift, and patterns after campaigns or incidents.
- Extracts reusable learnings from session-history patterns. Modes: analyze (extract), review (edit/manage), list (display active). Manages `.orchestrator/metrics/learnings.jsonl`.
- Extracts decisions, lessons, patterns, and surprises from completed phase artifacts (PLAN.md, SUMMARY.md, VERIFICATION.md, UAT.md, STATE.md) into LEARNINGS.md. Use after finishing workflow phases.
**Don't use when:** You want a structured incident analysis first (use /postmortem — run it before /learn), you need a context transfer for the next session (use /session-handoff), or you haven't completed a campaign yet (nothing to extract).
Usage:
- `/learn`: the most recently completed campaign
- `/learn {slug}`: a specific campaign by slug
- `/learn {file-path}`: a specific campaign file path
Gather sources:

Campaign file (required):
- If `/learn` (no argument): the most recent file in `.planning/campaigns/completed/*.md`, or `.planning/campaigns/*.md` where `Status: completed`.
- If `/learn {slug}`: search `.planning/campaigns/` for a file whose name contains `{slug}`, then `.planning/campaigns/completed/`.

Postmortem (optional): search `.planning/postmortems/` for files matching `*{slug}*`.

Audit telemetry (optional): `.planning/telemetry/audit.jsonl` filtered to this campaign.

Extract four categories from the gathered sources:
A. Successful Patterns — approaches/decisions that demonstrably worked (phases completed without rework, postmortem positives, commits that were never reverted). Per pattern: name, description, evidence (phase/commit/log), applicability.
B. Failed Patterns (Anti-patterns) — what was tried and failed (rework phases, circuit breaker trips, quality gate blocks, reverted commits). Per anti-pattern: name, description, failure mode, evidence, avoidance.
C. Key Decisions — from campaign Decision Log or inferred from phase descriptions. Per decision: what was decided, rationale, outcome (completed vs. rework).
D. Quality Rule Candidates — only generate a rule if: specific regex (not vague principle), applies to a specific file pattern, occurred more than once or was severe. Per candidate: regex, file pattern, trigger message, confidence (high/medium/low — skip low).
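The category-D bar could be expressed as a small filter. A sketch with assumed field names; `regex`, `file_pattern`, `occurrences`, `severe`, and `confidence` are illustrative, not the skill's actual schema:

```python
def keep_rule_candidate(c: dict) -> bool:
    """Apply the category-D bar: concrete regex, scoped file pattern,
    recurring or severe, and at least medium confidence."""
    has_regex = bool(c.get("regex"))          # a specific pattern, not a vague principle
    has_scope = bool(c.get("file_pattern"))   # e.g. "*.ts", never "all files"
    recurring = c.get("occurrences", 0) > 1 or c.get("severe", False)
    return has_regex and has_scope and recurring and c.get("confidence") in ("high", "medium")
```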
Create .planning/knowledge/{slug}-patterns.md with sections: header (extracted date, campaign path, postmortem path or "none"), ## Successful Patterns (name, description, evidence, applicability per pattern), ## Key Decisions (table: decision | rationale | outcome).
Create .planning/knowledge/{slug}-antipatterns.md with sections: header, ## Failed Patterns (name, what was done, failure mode, evidence, avoidance per pattern).
Create .planning/knowledge/ if it does not exist.
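The patterns-file step above can be sketched as follows, assuming simple dict shapes for patterns and decisions; the helper name and field names are illustrative, not the skill's actual code:

```python
from datetime import date
from pathlib import Path

def write_patterns_file(slug, campaign_path, postmortem_path, patterns, decisions):
    """Write .planning/knowledge/{slug}-patterns.md (illustrative sketch)."""
    knowledge = Path(".planning/knowledge")
    knowledge.mkdir(parents=True, exist_ok=True)   # create the dir if missing
    lines = [
        f"Extracted: {date.today().isoformat()}",
        f"Campaign: {campaign_path}",
        f"Postmortem: {postmortem_path or 'none'}",
        "",
        "## Successful Patterns",
    ]
    for p in patterns:
        lines += [f"### {p['name']}", p["description"],
                  f"Evidence: {p['evidence']}",
                  f"Applicability: {p['applicability']}", ""]
    lines += ["## Key Decisions",
              "| decision | rationale | outcome |",
              "| --- | --- | --- |"]
    lines += [f"| {d['decision']} | {d['rationale']} | {d['outcome']} |"
              for d in decisions]
    out = knowledge / f"{slug}-patterns.md"
    out.write_text("\n".join(lines) + "\n")
    return out
```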
For each high/medium-confidence rule candidate:
- Read `.claude/harness.json` (create with `{}` if missing).
- Initialize `qualityRules.custom` to `[]` if absent.
- Skip if the pattern already exists.
- Append `{ "name": "auto-{slug}-{N}", "pattern": "{regex}", "filePattern": "{glob}", "message": "Learned from campaign {slug}: {message}" }`.

Skip low-confidence rules: a bad rule firing on innocent code is worse than no rule.
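The append logic might look roughly like this sketch; the helper name is illustrative, while the JSON object shape follows the rule template on this page:

```python
import json
from pathlib import Path

def add_learned_rule(slug: str, n: int, regex: str, glob: str, message: str) -> bool:
    """Append a learned rule to .claude/harness.json; returns False on duplicate."""
    path = Path(".claude/harness.json")
    # Create with {} if missing.
    harness = json.loads(path.read_text()) if path.exists() else {}
    rules = harness.setdefault("qualityRules", {}).setdefault("custom", [])
    if any(r.get("pattern") == regex for r in rules):
        return False                      # duplicate: skip silently
    rules.append({
        "name": f"auto-{slug}-{n}",
        "pattern": regex,
        "filePattern": glob,
        "message": f"Learned from campaign {slug}: {message}",
    })
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(json.dumps(harness, indent=2) + "\n")
    return True
```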
```
=== /learn: {Campaign Slug} ===
Sources: campaign {path} | postmortem {path or "not found"} | {N} audit entries matched
Extracted: {N} patterns | {N} anti-patterns | {N} decisions | {N} rule candidates ({M} added, {K} skipped)
Files: .planning/knowledge/{slug}-patterns.md, {slug}-antipatterns.md
Rules added to harness.json: {M} (one line per rule)
Next: review .planning/knowledge/ and promote useful rules to CLAUDE.md for permanent enforcement.
```
Edge cases:
- No completed campaigns: output a message and stop.
- No Decision Log: extract decisions from phase descriptions; note "inferred from phase descriptions" in the output.
- harness.json missing: create it with only the `qualityRules` section; do not invent other fields.
- Duplicate rule: skip silently; count it in "skipped — already exist".
- Postmortem missing: proceed without it; note this in the summary.
- Large telemetry file: read the last 200 lines only.
- Zero extractable patterns: write the knowledge files with empty sections and note "campaign may have been too brief." Do not skip file creation.
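The large-telemetry edge case can be handled with a bounded tail read. A sketch assuming one JSON object per line in the audit log; the helper name and the substring-based slug match are illustrative, not the skill's actual code:

```python
import json
from collections import deque
from pathlib import Path

def matching_audit_entries(slug, path=".planning/telemetry/audit.jsonl", limit=200):
    """Read only the last `limit` lines of the audit log and keep entries
    that mention the campaign slug (sketch of the large-file edge case)."""
    p = Path(path)
    if not p.exists():
        return []
    with p.open() as f:
        tail = deque(f, maxlen=limit)     # keeps memory bounded on huge logs
    entries = []
    for line in tail:
        try:
            e = json.loads(line)
        except json.JSONDecodeError:
            continue                      # tolerate a truncated or malformed line
        if slug in json.dumps(e):
            entries.append(e)
    return entries
```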
Disclosure: "Extracting learnings to .planning/knowledge/{slug}-*.md. Creates new files only."
Reversibility: green — writes to .planning/knowledge/{slug}-*.md only; delete those files to undo
Trust gates:
/learn does not produce a full HANDOFF block (it is a utility, not a campaign). It outputs the summary block in Step 6 and then waits for the next command.