# ce
Reviews completed coding sessions to extract actionable improvements: DX friction, documentation gaps, architecture issues, anti-patterns, bug prevention, and tooling updates.
Install: `npx claudepluginhub rileyhilliard/claude-essentials --plugin ce`

This skill uses the workspace's default tool permissions.
Review a completed session or task to assess how it went and extract improvements. This covers both "how did we do" evaluation and "what can we learn" investigation.
Use after completing any non-trivial task or session. The goal is continuous improvement.
Load the relevant reference based on what you're investigating:
| Situation | Load | File |
|---|---|---|
| Agent hit friction during execution (wrong files, bad assumptions, unclear conventions) | DX Friction | references/dx-friction.md |
| Documentation (READMEs, comments, API docs) was wrong, incomplete, or misleading | Documentation Gaps | references/documentation-gaps.md |
| Code was hard to understand or things were in unexpected places | Architecture Clarity | references/architecture-clarity.md |
| A bug was fixed but the root cause suggests a process gap | Bug Prevention | references/bug-prevention.md |
| Code works but diverges from best practices, idioms, or established conventions | Anti-Patterns | references/anti-patterns.md |
| Skills, hooks, commands, agents, or `.claude/` configs need updating | Tooling Improvements | references/tooling-improvements.md |
Load multiple references when the session spans investigation types.
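As a sketch, the situation-to-reference mapping in the table above can be expressed as a simple lookup. The `REFERENCES` dict and `references_for` function are hypothetical, illustrative only; the file paths come from the table:

```python
# Hypothetical sketch: map investigation types to the reference files
# listed in the table above. Names here are illustrative only.
REFERENCES = {
    "dx_friction": "references/dx-friction.md",
    "documentation_gaps": "references/documentation-gaps.md",
    "architecture_clarity": "references/architecture-clarity.md",
    "bug_prevention": "references/bug-prevention.md",
    "anti_patterns": "references/anti-patterns.md",
    "tooling_improvements": "references/tooling-improvements.md",
}

def references_for(situations):
    """Return the reference files to load. A session that spans
    multiple investigation types loads multiple references."""
    return [REFERENCES[s] for s in situations]
```

A mixed session (say, execution friction plus a fixed bug) would load both `references/dx-friction.md` and `references/bug-prevention.md`.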
Every post-mortem follows four steps regardless of investigation type.
Walk through the session timeline. For each significant step, note:
Don't editorialize yet. Just document the sequence.
Evaluate how the session went overall:
Flag anything that felt harder than it should have been, even if it ultimately succeeded.
For each friction point, ask: "What would have prevented this?"
Push past the first answer. "I should have read the file more carefully" is a symptom. "The file's name doesn't indicate what it contains" or "there's no convention documented for where this type of code lives" is a systemic cause.
Good root causes point to something fixable: a missing convention, a misleading file name, an undocumented decision.
Bad root causes are just restatements of what went wrong: they name the mistake but offer no lever to pull.
Every finding should produce one of these:
Each action should have a specific file path and description of the change. Vague actions like "improve documentation" are not useful.
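For instance, a well-scoped action entry might look like this (the file paths and the convention it describes are hypothetical, for illustration only):

```markdown
**Action:** Add a "Where code lives" section to `docs/CONTRIBUTING.md`
documenting that route handlers belong in `src/routes/`, not `src/api/`.
```

Compare this with "improve documentation": the specific version names the file to change and the exact content to add, so anyone can execute it without re-investigating.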
## Session Post-Mortem
### What Happened
[Timeline of the session with key decision points]
### Execution Assessment
- **Outcome:** [What was delivered vs what was asked]
- **Efficiency:** [Direct path or detours? Where and why?]
- **What worked well:** [Patterns, skills, or approaches worth repeating]
### Findings
#### Finding 1: [Title]
**What happened:** [The friction or issue, stated factually]
**Root cause:** [The systemic issue underneath]
**Action:** [Specific change with file path]
**Priority:** [High/Medium/Low based on how often this would recur]
#### Finding 2: [Title]
...
### Summary
[1-2 sentences: biggest takeaway and what should change first]