Extracts reusable patterns from the session, self-evaluates quality with checklist and verdict, determines Global or Project save location, and saves approved skills.
npx claudepluginhub divenhuang88/everything-claude-code

# /learn-eval - Extract, Evaluate, then Save

Extends `/learn` with a quality gate, save-location decision, and knowledge-placement awareness before writing any skill file.

## What to Extract

Look for:

1. **Error Resolution Patterns** — root cause + fix + reusability
2. **Debugging Techniques** — non-obvious steps, tool combinations
3. **Workarounds** — library quirks, API limitations, version-specific fixes
4. **Project-Specific Patterns** — conventions, architecture decisions, integration patterns

## Process
1. Review the session for extractable patterns
2. Identify the most valuable/reusable insight
3. Determine save location:
   - **Global** (`~/.claude/skills/learned/`): Generic patterns usable across 2+ projects (bash compatibility, LLM API behavior, debugging techniques, etc.)
   - **Project** (`.claude/skills/learned/` in current project): Project-specific knowledge (quirks of a particular config file, project-specific architecture decisions, etc.)
4. Draft the skill file using this format:
---
name: pattern-name
description: "Under 130 characters"
user-invocable: false
origin: auto-extracted
---
# [Descriptive Pattern Name]
**Extracted:** [Date]
**Context:** [Brief description of when this applies]
## Problem
[What problem this solves - be specific]
## Solution
[The pattern/technique/workaround - with code examples]
## When to Use
[Trigger conditions]
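The save-location decision in step 3 and the final write can be sketched in shell. This is a hedged sketch, not the command's actual implementation; `SCOPE`, the `pattern-name` file name, and the heredoc body are illustrative stand-ins.

```shell
# Pick the destination decided in step 3 (Global vs Project).
SCOPE="global"   # or "project"
if [ "$SCOPE" = "global" ]; then
  DEST="$HOME/.claude/skills/learned"
else
  DEST=".claude/skills/learned"
fi

mkdir -p "$DEST"

# Write the drafted skill file in the format above.
cat > "$DEST/pattern-name.md" <<'EOF'
---
name: pattern-name
description: "Under 130 characters"
user-invocable: false
origin: auto-extracted
---

# Descriptive Pattern Name
EOF
```

Nothing is written until the quality gate below approves the draft; the sketch only shows where an approved file would land.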
## Quality Gate — Checklist + Holistic Verdict
Execute all of the following before evaluating the draft:

- Grep `~/.claude/skills/` and relevant project `.claude/skills/` files by keyword to check for content overlap
- Check MEMORY.md for overlapping notes
- Check whether the draft should be appended to an existing skill rather than saved as a new file
- Confirm the pattern is reusable rather than a one-off

Synthesize the checklist results and draft quality, then choose one of the following verdicts:
| Verdict | Meaning | Next Action |
|---|---|---|
| Save | Unique, specific, well-scoped | Proceed to Step 6 |
| Improve then Save | Valuable but needs refinement | List improvements → revise → re-evaluate (once) |
| Absorb into [X] | Should be appended to an existing skill | Show target skill and additions → Step 6 |
| Drop | Trivial, redundant, or too abstract | Explain reasoning and stop |
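The keyword overlap grep from the checklist above can be sketched like this. The keyword and the scratch directory (standing in for `~/.claude/skills/` and the project's `.claude/skills/`) are illustrative:

```shell
# Scratch fixture standing in for the real skill trees.
SKILLS_DIR=$(mktemp -d)
echo "Handles retry backoff for flaky APIs" > "$SKILLS_DIR/api-retry.md"

# Case-insensitive, recursive keyword search; list matching files only.
KEYWORD="retry backoff"
MATCHES=$(grep -ril "$KEYWORD" "$SKILLS_DIR" 2>/dev/null || true)

if [ -n "$MATCHES" ]; then
  echo "Overlap found: consider the Absorb verdict for the matching skill"
else
  echo "No overlap: a new file is appropriate"
fi
```

A non-empty `MATCHES` list points toward the Absorb or Drop verdicts; an empty one clears the non-redundancy check.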
Guideline dimensions (informing the verdict, not scored): Specificity, Actionability, Scope Fit, Non-redundancy, Coverage
### Checklist
- [x] skills/ grep: no overlap (or: overlap found → details)
- [x] MEMORY.md: no overlap (or: overlap found → details)
- [x] Existing skill append: new file appropriate (or: should append to [X])
- [x] Reusability: confirmed (or: one-off → Drop)
### Verdict: Save / Improve then Save / Absorb into [X] / Drop
**Rationale:** (1-2 sentences explaining the verdict)
This version replaces the previous 5-dimension numeric scoring rubric (Specificity, Actionability, Scope Fit, Non-redundancy, Coverage scored 1-5) with a checklist-based holistic verdict system. Modern frontier models (Opus 4.6+) have strong contextual judgment — forcing rich qualitative signals into numeric scores loses nuance and can produce misleading totals. The holistic approach lets the model weigh all factors naturally, producing more accurate save/drop decisions while the explicit checklist ensures no critical check is skipped.