Review and clean up memory files to keep context costs low — triggered at session start, before any work begins
From claude-toolkit. Install with `npx claudepluginhub johwer/marketplace --plugin claude-toolkit`. This skill uses the workspace's default tool permissions.
Memory cleanup belongs at session start — before work begins, when the context is clean and nothing is in-flight. Never during active work or compaction (you might delete something that's still needed).
Good triggers (session start):
- `/create-stories` Step 0 (cleanup phase, before any tickets)
- `/workspace-launch` Step 0 (before fetching ticket)
- `/infra-ticket` Step 0 (before fetching ticket)

Bad triggers (never here):

- `/compact` — context is under pressure, and memories may still be relevant

The bash script runs first (0 tokens). Only invoke the skill if warnings are found AND the user says yes.
```bash
wc -l ~/.claude/projects/*/memory/MEMORY.md
```
Threshold: > 150 lines is getting close to the 200-line truncation limit.
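The line-count gate can be sketched as a small shell function (a minimal sketch; the function name and warning format are illustrative, not part of the skill):

```bash
# Illustrative sketch of the zero-token pre-check: flag any MEMORY.md
# past the 150-line threshold and nearing the 200-line truncation limit
check_memory_size() {
  f=$1
  [ -f "$f" ] || return 0        # glob may not match; skip quietly
  lines=$(wc -l < "$f")
  if [ "$lines" -gt 150 ]; then
    echo "warn: $f has $lines lines (truncation at 200)"
  fi
}

for f in ~/.claude/projects/*/memory/MEMORY.md; do
  check_memory_size "$f"
done
```

Only when this prints a warning would the skill itself (and its token cost) be invoked.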
Action: Suggest consolidating or removing entries that are stale, redundant, or no longer worth their token cost.
For each memory file in the memory directory:
```bash
ls -la ~/.claude/projects/*/memory/*.md
```
Check each file against current reality:
Stale indicators: referenced paths or files that no longer exist (verify with `ls` and `grep`).
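One way to sketch that check in shell (the function name and the extension list are assumptions, not part of the skill):

```bash
# Illustrative staleness sweep: pull path-like references out of a memory
# file and flag any that no longer exist on disk
find_stale_refs() {
  grep -oE '[A-Za-z0-9_./-]+\.(md|sh|py|ts)' "$1" | sort -u |
  while read -r path; do
    [ -e "$path" ] || echo "stale: $path"
  done
}
```

This only covers literal file paths; a real sweep would also want to check command names and branch references.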
The dream-team-learnings file grows with every Dream Team session. It's not loaded automatically, but it gets referenced by `/retro-proposals`.
```bash
wc -l ~/.claude/projects/*/memory/dream-team-learnings.md
```
Threshold: > 500 lines
Action: Suggest running /retro-proposals to process learnings into destination files, then archive old entries:
- Run `/retro-proposals` to route unprocessed learnings
- Move processed entries to `dream-team-learnings-archive-YYYY-MM.md`

Calculate the total cost of always-loaded memory:
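The archive step could be sketched like this (the function name, paths, and the 100-line keep window are illustrative, not prescribed by the skill):

```bash
# Illustrative archive step: move all but the newest lines of the learnings
# file into a dated archive so the active file stays small
archive_learnings() {
  src=$1; arc=$2; keep=${3:-100}
  total=$(wc -l < "$src")
  [ "$total" -le "$keep" ] && return 0
  head -n "$((total - keep))" "$src" >> "$arc"   # oldest entries to archive
  tail -n "$keep" "$src" > "$src.tmp" && mv "$src.tmp" "$src"
}

# e.g. archive_learnings dream-team-learnings.md \
#      dream-team-learnings-archive-YYYY-MM.md
```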
```bash
# MEMORY.md tokens (loaded every prompt)
wc -w ~/.claude/projects/*/memory/MEMORY.md | awk '{printf "MEMORY.md: %d tokens\n", $1 * 1.3}'
```
Budget guideline:
| Component | Budget | Why |
|---|---|---|
| MEMORY.md | < 1,500 tokens | Loaded every single prompt |
| Individual memories | < 500 tokens each | Loaded on access |
| Total memory dir | < 15,000 tokens | Full read during cleanup |
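Extending the estimate above to the whole directory gives a roll-up against the 15,000-token budget. A sketch, using the same rough ~1.3 tokens-per-word factor (the function name and `myproj` path are placeholders):

```bash
# Illustrative roll-up: rough token total for every file in a memory
# directory, using the same ~1.3 tokens-per-word estimate
memory_token_total() {
  words=$(cat "$1"/*.md 2>/dev/null | wc -w)
  awk -v w="$words" 'BEGIN { printf "%d\n", w * 1.3 }'
}

# memory_token_total ~/.claude/projects/myproj/memory   # compare to 15000
```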
If MEMORY.md exceeds 1,500 tokens, suggest consolidating or trimming entries until it is back under budget.
Present findings as a short report:
```
🧹 Memory Health Check

MEMORY.md: 136 lines / 1,827 tokens (budget: 1,500) ⚠️ slightly over
Memory files: 10 files / 14,473 tokens total
dream-team-learnings: 615 lines — consider archiving processed entries

Suggestions:
1. [action] — [why] — saves ~X tokens
2. [action] — [why] — saves ~X tokens

Apply suggestions? (y/N)
```
For learnings entries, run `/retro-proposals` first.

When a memory has been confirmed multiple times and is stable:
```
Memory file (cross-session learning)
    ↓ confirmed 3+ times, stable pattern
Conventions doc or CLAUDE.md (permanent, loaded automatically)
    ↓ memory file deleted (no longer needed)
Tokens saved: ~200-500 per prompt
```
This is the natural lifecycle: learn → remember → formalize → forget.
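The last two steps of that lifecycle, formalize then forget, could be sketched as one helper (the function name and header format are illustrative):

```bash
# Illustrative promotion step: formalize a stable memory in CLAUDE.md,
# then delete the per-session memory file so it stops costing tokens
promote_memory() {
  mem=$1; target=$2
  {
    printf '\n## Promoted from memory (%s)\n' "$(date +%Y-%m-%d)"
    cat "$mem"
  } >> "$target"
  rm "$mem"
}
```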