Compact learnings by merging duplicates, boosting confidence on repeated patterns, and pruning stale entries. The LLM performs the semantic analysis.
Run this command periodically to keep the knowledge base high-quality and efficient.
/plugin marketplace add saadshahd/moo.md
/plugin install hope@moo.md
Deduplicate and merge learnings files using semantic analysis.
Read the existing learnings files:
cat ~/.claude/learnings/failures.jsonl 2>/dev/null
cat ~/.claude/learnings/discoveries.jsonl 2>/dev/null
cat ~/.claude/learnings/constraints.jsonl 2>/dev/null
If files are empty or missing, report "No learnings to compact" and exit.
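A minimal sketch of that empty-or-missing check, assuming the paths above (`has_learnings` is a hypothetical helper, not part of the command):

```shell
# has_learnings: true if any .jsonl under the given directory has content
has_learnings() {
  cat "$1"/*.jsonl 2>/dev/null | grep -q .
}

if ! has_learnings ~/.claude/learnings; then
  echo "No learnings to compact"
fi
```

The redirected `cat` swallows the error when the directory or files are missing, and `grep -q .` returns false when every file is empty.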
For each category, analyze entries and identify:

**Duplicates to merge:** entries with overlapping `applies_to` tags that record the same discovery in different wording.

**Merge strategy:** keep the clearest wording, boost confidence when a pattern repeats, and union the `applies_to` arrays.

**Entries to prune:** stale or superseded learnings. DO NOT use fixed rules like "90 days old = prune". Assess each entry's current relevance semantically.
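Identifying duplicates is semantic work for the LLM, but the mechanical part of a merge can be sketched with `jq` (assumed available; the two example entries are illustrative): take the higher confidence and union the `applies_to` arrays.

```shell
# Merge two duplicate entries: keep the later wording, take the max
# confidence, and union the applies_to tags
jq -s '{
  ts: .[1].ts,
  context: .[1].context,
  discovery: .[1].discovery,
  confidence: ([.[0].confidence, .[1].confidence] | max),
  applies_to: ((.[0].applies_to + .[1].applies_to) | unique)
}' <<'EOF'
{"ts":"2024-01-01","context":"auth flow","discovery":"tokens expire early","confidence":0.7,"applies_to":["api"]}
{"ts":"2024-02-01","context":"auth flow","discovery":"tokens expire early","confidence":0.8,"applies_to":["api","auth"]}
EOF
```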
Before writing any changes, create a timestamped backup:
BACKUP_DIR=~/.claude/learnings/backup-$(date +%Y%m%d-%H%M%S)
mkdir -p "$BACKUP_DIR"
cp ~/.claude/learnings/*.jsonl "$BACKUP_DIR/" 2>/dev/null
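If a compaction needs to be rolled back, the most recent backup can be restored; a sketch, assuming the `backup-YYYYMMDD-HHMMSS` naming above (lexical sort matches chronological order for this format):

```shell
# Restore the newest backup directory, if any exists
latest=$(ls -d ~/.claude/learnings/backup-* 2>/dev/null | sort | tail -n 1)
if [ -n "$latest" ]; then
  cp "$latest"/*.jsonl ~/.claude/learnings/
fi
```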
For each category, write the compacted JSONL:
# Write to a temp file, then move it into place (a rename on the same
# filesystem is atomic; a bare `cat >` truncate-and-rewrite is not)
cat > ~/.claude/learnings/discoveries.jsonl.tmp << 'JSONL'
{"ts":"...","context":"...","discovery":"...","confidence":0.X,"applies_to":["..."]}
...
JSONL
mv ~/.claude/learnings/discoveries.jsonl.tmp ~/.claude/learnings/discoveries.jsonl
Output a detailed report explaining decisions:
## Compaction Report
Backup created: ~/.claude/learnings/backup-YYYYMMDD-HHMMSS/
### discoveries.jsonl: N → M entries
**Merged:**
- "insight A" + "insight B" → "merged insight" (confidence 0.80 → 0.85)
- Reason: Same discovery about [topic], different wording
**Pruned:**
- "old specific insight"
- Reason: Superseded by more general learning
**Kept:** X entries unchanged
### failures.jsonl: N → M entries
[Similar format]
### constraints.jsonl: N → M entries
[Similar format]
---
Total: X entries → Y entries (Z merged, W pruned)
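The N → M counts in the report can be gathered mechanically, since each JSONL entry is one line; a sketch, assuming the paths above (`count_entries` is a hypothetical helper):

```shell
# count_entries: number of non-empty lines (= entries) in a JSONL file
count_entries() {
  grep -c . "$1" 2>/dev/null || true
}

for f in ~/.claude/learnings/*.jsonl; do
  [ -e "$f" ] || continue
  printf '%s: %s entries\n' "$(basename "$f")" "$(count_entries "$f")"
done
```

Run this over the backup and the rewritten files to fill in the before and after totals.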