From agent-almanac
Audits, classifies, and selectively prunes Claude Code agent memories by type, age, access frequency, staleness, and fidelity using decision trees and audit trails. Use when memory grows large, project state shifts, or retrieval degrades.
Install: `npx claudepluginhub pjt222/agent-almanac`
Audit, classify, and selectively forget stored memories. Memory is infrastructure. Forgetting is policy. This skill defines the policy.
Where manage-memory focuses on organizing and growing memory (what to keep, how to structure it), this skill focuses on the inverse: what to discard, how to detect decay, and how to ensure that forgetting is deliberate rather than accidental. The two skills are complementary and should be used together during periodic maintenance.
Locate the memory directory (e.g., `~/.claude/projects/<project-path>/memory/`). Read all memory files and classify each entry by four dimensions.
```bash
# Inventory the memory directory
ls -la <memory-dir>/
wc -l <memory-dir>/*.md

# Count total entries (approximate by counting top-level bullets and headers)
grep -c "^- \|^## " <memory-dir>/MEMORY.md
for f in <memory-dir>/*.md; do echo "$f: $(grep -c '^- \|^## ' "$f") entries"; done
```
Classify each memory entry into one of these types:
| Type | Description | Example | Default retention |
|---|---|---|---|
| Project | Facts about project structure, architecture, conventions | "skills/ has 310 SKILL.md files across 55 domains" | Keep until verified stale |
| Decision | Choices made and their rationale | "Chose hub-and-spoke over sequential for review teams because..." | Keep indefinitely |
| Pattern | Debugging solutions, workflow insights, recurring behaviors | "Exit code 5 means quoting error — use temp files" | Keep until superseded |
| Reference | Links, version numbers, external resources | "mcptools docs: https://..." | Keep until verified stale |
| Feedback | User preferences, corrections, style guidance | "User prefers kebab-case for file names" | Keep indefinitely |
| Ephemeral | Session-specific context that leaked into persistent memory | "Currently working on issue #42" | Prune immediately |
For each entry, also note its approximate age and how frequently it is accessed.
Expected: A complete inventory with every memory entry classified by type, with age and access frequency estimates. Ephemeral entries are already flagged for immediate removal.
On failure: If memory files are too large or unstructured to classify entry-by-entry, work at the section level. Classify entire sections rather than individual bullets. The goal is coverage, not granularity.
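As a concrete starting point, a quick keyword scan can surface likely Ephemeral entries for review. A minimal sketch, assuming a temp directory stand-in for the real memory path and an illustrative keyword list (tune both to your project):

```shell
set -eu

mem_dir=$(mktemp -d)              # stand-in for the real memory directory
printf -- '- Currently working on issue #42\n- Chose hub-and-spoke over sequential for review teams\n' \
  > "$mem_dir/MEMORY.md"

# Session-specific language marks an Ephemeral candidate for immediate pruning.
grep -n -i -E 'currently|this session|in progress|working on' "$mem_dir"/*.md || true
```

This only pre-screens; each hit still needs the type classification above before deletion.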
Compare memory claims against current project state. Staleness is the most common form of memory decay.
Check for these staleness patterns:
```bash
# Spot-check counts against source of truth
grep -oP '\d+ skills' <memory-dir>/MEMORY.md
grep -c "^  - id:" skills/_registry.yml

# Check for references to files that no longer exist
grep -oP '`[^`]+\.(md|yml|R|js|ts)`' <memory-dir>/MEMORY.md | sort -u | while read f; do
  path="${f//\`/}"
  [ ! -f "$path" ] && echo "STALE: $path referenced but not found"
done

# Check for references to old names/paths
grep -i "old-name\|previous-name\|renamed-from" <memory-dir>/*.md
```
Mark each stale entry with the type of staleness and the current correct value.
Expected: A list of stale entries with specific evidence of what changed. Each stale entry has a recommended action: update (if the correct value is known), verify (if uncertain), or prune (if the entire entry is obsolete).
On failure: If you cannot verify a claim because it references external state (APIs, third-party docs, deployment status), mark it as unverifiable rather than assuming it is correct. Unverifiable entries are candidates for pruning if they are not actively useful.
Test whether memories still produce useful context when retrieved. This is the hardest step because an agent cannot verify whether its own compressed memories are faithful — you need external anchors.
Fidelity check methods:
Round-trip verification: Read a memory entry, then check the actual project state it describes. Does the memory lead you to the right file, the right pattern, the right conclusion?
Compression loss detection: Compare memory summaries against the original source material. When a 50-line discussion was compressed to a 2-line memory, did the compression preserve the actionable insight or just the topic label?
```bash
# Find the source that a memory entry was derived from
# (git log, old PRs, original files)
git log --oneline --all --grep="<keyword from memory entry>" | head -5
```
Contradiction scan: Search for memories that contradict each other or contradict CLAUDE.md / project documentation.
```bash
# Look for potential contradictions in counts
grep -n "total" <memory-dir>/MEMORY.md
grep -n "total" CLAUDE.md
# Compare the values — they should agree
```
Utility test: For each memory entry, ask: "If this entry were deleted, would anything go wrong in the next 5 sessions?" If the answer is "probably not," the entry has low fidelity value regardless of accuracy.
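One cheap round-trip check is to reconcile a count claimed in memory against the live project. A minimal sketch, using a hypothetical `skills/*/SKILL.md` layout and a placeholder claimed value (both are demo assumptions):

```shell
set -eu

# Demo setup: pretend the project has 3 SKILL.md files
proj=$(mktemp -d)
for d in a b c; do
  mkdir -p "$proj/skills/$d"
  touch "$proj/skills/$d/SKILL.md"
done

claimed=310                                             # count stored in MEMORY.md
actual=$(find "$proj/skills" -name SKILL.md | wc -l | tr -d ' ')
if [ "$claimed" -ne "$actual" ]; then
  echo "COUNT DRIFT: memory says $claimed, project has $actual"
fi
```

A mismatch here is a fidelity failure for that entry, not just staleness: the memory actively misleads.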
Expected: Each memory entry now has a fidelity assessment: high (verified accurate and useful), medium (probably accurate, occasionally useful), low (unverified or rarely useful), or failed (verified inaccurate or contradictory).
On failure: If fidelity checks are inconclusive for many entries, focus on the entries with the highest potential impact. A wrong memory about project architecture is more dangerous than a wrong memory about a debugging trick. Prioritize checking skeleton-level facts over flesh-level details.
Use this decision tree to determine what to prune, in priority order:
Pruning Decision Tree (apply in order):
```
1. EPHEMERAL entries (Step 1 classification)
   → Delete immediately. These should never have been persisted.
2. FAILED fidelity entries (Step 3)
   → Delete immediately. Inaccurate memories are worse than no memories.
3. DUPLICATES
   → Keep the most complete/accurate version, delete others.
   → If duplicates span MEMORY.md and a topic file, keep the topic file version.
4. STALE entries with known corrections (Step 2)
   → UPDATE if the entry is otherwise useful (change the stale value to current).
   → DELETE if the entire entry is obsolete (the topic no longer matters).
5. LOW fidelity, low access frequency entries
   → Delete. These are taking space without providing value.
6. MEDIUM fidelity entries about completed/closed work
   → Archive or delete. Past sprint details, resolved incidents, merged PRs.
   → Exception: keep if the resolution contains a reusable pattern.
7. REFERENCE entries with freely available sources
   → Delete if the reference is a Google search away.
   → Keep if the reference is hard to find or has project-specific context.
```
For each deletion, record the entry, its classification, and the reason for deletion (used in Step 7).
Before applying any DELETE action from this tree, check whether the entry warrants inoculation (Step 5). Failed strategies, abandoned approaches, and dangerous patterns are candidates for delete + inoculate rather than delete-only.
Expected: A clear list of entries to delete, entries to update, and entries to keep — each with a documented reason. The keep/delete ratio depends on memory health; a well-maintained memory might prune 5-10%, a neglected one might prune 30-50%.
On failure: If the decision tree produces ambiguous results for many entries, apply a tighter filter: "Would I write this entry today, knowing what I know now?" If not, it is a deletion candidate. Err toward pruning — it is easier to re-learn a fact than to work around a wrong memory.
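Rules 1 and 2 can be applied semi-mechanically once earlier steps have tagged entries. A sketch assuming a hypothetical inline `<!-- EPHEMERAL -->` marker convention (the marker is an assumption, not part of the skill spec):

```shell
set -eu

mem_dir=$(mktemp -d)
cat > "$mem_dir/MEMORY.md" <<'EOF'
- Currently working on issue #42 <!-- EPHEMERAL -->
- Exit code 5 means quoting error -- use temp files
EOF

# Record what is about to be removed (this feeds the Step 7 pruning log),
# then delete the tagged lines in place.
grep '<!-- EPHEMERAL -->' "$mem_dir/MEMORY.md" > "$mem_dir/pruned-$(date +%F).txt" || true
sed -i.bak '/<!-- EPHEMERAL -->/d' "$mem_dir/MEMORY.md"
```

Recording before deleting keeps the forgetting reviewable even when the pruning itself is automated.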
Some abandoned conclusions cannot be safely deleted. Deletion alone fails when the memory-generating conditions persist — the system rebuilds the deleted memory from the same inputs along the same reasoning path. For these cases, write a counter-memory that prevents re-derivation alongside (or instead of) deletion.
Decision rule — delete-only vs. delete + inoculate vs. inoculate-only:
| Memory category | Action | Why |
|---|---|---|
| Stale fact, outdated pointer, expired context | Delete-only | Retrieval cleanup; no behavioral risk if regenerated |
| Failed strategy, dangerous pattern, abandoned approach with persistent triggers | Delete + inoculate | The reasoning path will regenerate the conclusion otherwise |
| Decision later overridden but original rationale matters | Inoculate-only | Preserve original entry; add SUPERSEDED counter-memory pointing to it |
SUPERSEDED record format (frontmatter for auto-memory; structure adapts to other memory systems):
```markdown
---
name: superseded-<short-id>
description: Counter-memory preventing re-derivation of <pattern>
type: superseded
---
SUPERSEDED <YYYY-MM-DD>
Pattern: <what was tried — describe the conclusion or strategy>
Period: <start> to <end>
Evidence: <what happened — concrete data, not narrative>
Abandonment reason: <specific cause; not "did not work">
Do not re-derive from: <signal types or input patterns that previously led here>
Supersedes: <path to original memory if delete + inoculate, or N/A>
```
Place SUPERSEDED records as their own files in the memory directory (e.g., superseded_strategy_X.md) so they appear in retrieval alongside active memories. The counter-memory becomes the enacted change mechanism: when a similar signal arrives, the SUPERSEDED record surfaces and blocks the regeneration path.
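Materializing a counter-memory as its own file can be scripted. A sketch in which every field value is an illustrative placeholder:

```shell
set -eu

mem_dir=$(mktemp -d)   # stand-in for the real memory directory

# Write the SUPERSEDED record as its own file so it surfaces in retrieval.
cat > "$mem_dir/superseded_retry-loop.md" <<'EOF'
---
name: superseded-retry-loop
description: Counter-memory preventing re-derivation of tight retry loops
type: superseded
---
SUPERSEDED 2025-06-01
Pattern: retry failed API calls in a tight loop without backoff
Period: 2025-04-10 to 2025-05-28
Evidence: 3 rate-limit lockouts; error rate rose from 2% to 14%
Abandonment reason: retries amplified load during provider incidents
Do not re-derive from: transient 5xx spikes in request logs
Supersedes: N/A
EOF
```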
When NOT to inoculate:
Inoculation hygiene:
Keep the `Pattern` and `Do not re-derive from` fields specific. Vague counter-memories ("don't try complicated solutions") are noise.

Expected: For every Step 4 deletion candidate involving abandoned strategies or dangerous patterns, a corresponding SUPERSEDED counter-memory file is created before the original entry is deleted. The pruning log records both the deletion and the inoculation. Active memory remains lean while the regeneration paths are blocked.
On failure: If unsure whether an entry warrants inoculation, default to inoculate. A redundant SUPERSEDED record costs little; a regenerated bad pattern costs much more. If the SUPERSEDED list grows large enough to be noise itself, that is a signal to investigate the upstream conditions producing repeated abandonments — the fix is at the input layer, not the memory layer.
Define "what NOT to save" rules to prevent future memory pollution. Review existing memories for patterns that should have been filtered at write time.
Patterns that should never become persistent memories:
| Pattern | Why | Example |
|---|---|---|
| Session-specific task state | Stale by next session | "Currently debugging issue #42" |
| Intermediate reasoning | Not a conclusion | "Tried approach A, didn't work because..." |
| Debug output / stack traces | Ephemeral diagnostic data | "Error was: TypeError at line 234..." |
| Exact command sequences | Brittle, version-dependent | "Run npm install foo@3.2.1 && ..." |
| Emotional/tonal notes | Not actionable | "User seemed frustrated" |
| Duplicates of CLAUDE.md | Already in system prompt | "Project uses renv for dependencies" |
| Unverified single observations | May be wrong | "I think the API rate limit is 100/min" |
If any of these patterns are found in existing memory, add them to the deletion list from Step 4.
Document the filter rules in MEMORY.md or a retention-policy.md topic file so future sessions can reference them before writing new memories.
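The documented filter rules can double as an automated write-time gate. A sketch whose regex merely mirrors the example patterns in the table above (extend it per project):

```shell
set -eu

# Never-persist patterns, mirroring the table above (illustrative, not exhaustive)
never_persist='currently|this session|stack trace|TypeError|npm install|seemed (frustrated|happy)|I think'

candidate='Currently debugging issue #42'   # memory line about to be written
if printf '%s\n' "$candidate" | grep -qiE "$never_persist"; then
  echo "REJECTED: matches a never-persist pattern"
else
  echo "ACCEPTED"
fi
```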
Expected: A set of preemptive filter rules documented in the memory directory. Any existing entries matching these patterns are flagged for deletion.
On failure: If documenting filter rules feels premature (memory is small, pollution is minimal), skip the documentation but still apply the filters to catch any existing violations. The rules can be formalized later when the memory directory is more mature.
Log every deletion so the forgetting itself is reviewable. Create or update a pruning log.
```markdown
<!-- In <memory-dir>/pruning-log.md or appended to MEMORY.md -->
## Pruning Log

### YYYY-MM-DD Audit
- **Entries audited**: N
- **Entries pruned**: M (X%)
- **Entries updated**: K
- **Staleness found**: [list of stale patterns detected]
- **Fidelity failures**: [list of entries that failed verification]

#### Deletions
| Entry (summary) | Type | Reason |
|-----------------|------|--------|
| "Currently working on issue #42" | Ephemeral | Session-specific, stale |
| "skills/ has 280 SKILL.md files" | Project | Count drift: actual is 310 |
| "Use acquaint::mcp_session()" | Pattern | Package renamed to mcptools |
```
Keep the pruning log concise. It exists for accountability, not archaeology. If the log itself grows large, summarize older entries: "2025: 3 audits, 47 total entries pruned (mostly count drift and ephemeral leakage)."
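Appending a log entry can be a one-liner at the end of each audit. A sketch with placeholder counts and a temp-directory stand-in for the memory path:

```shell
set -eu

mem_dir=$(mktemp -d)   # stand-in for the real memory directory

# Append a timestamped audit header; real counts come from Steps 1-4.
{
  printf '### %s Audit\n' "$(date +%F)"
  printf -- '- **Entries audited**: %d\n' 42
  printf -- '- **Entries pruned**: %d (%d%%)\n' 5 12
} >> "$mem_dir/pruning-log.md"
```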
Expected: A timestamped pruning log entry documenting what was deleted and why. The log is stored in the memory directory alongside the memories themselves.
On failure: If creating a separate log file feels excessive (only 1-2 entries pruned), add a brief note to MEMORY.md instead: `<!-- Last pruned: YYYY-MM-DD, removed 2 stale entries -->`. Any record is better than silent deletion.
Certain memory entries should be immune from pruning regardless of age, access frequency, or fidelity score. These represent irreplaceable context that, if lost, would require significant effort to reconstruct.
Protected memory criteria:
| Category | Examples | Why protected |
|---|---|---|
| Architecture decisions | "Chose flat skill directory over nested" | Rationale is lost if re-derived later |
| User identity preferences | "Always use kebab-case," "Never auto-commit" | Explicit user intent, not inferrable |
| Security audit results | "Last audit: 2025-12-13 — PASSED" | Compliance evidence with timestamps |
| Rename/migration records | "Repo renamed: X to Y on date Z" | Cross-reference integrity depends on this |
Designation method: Mark protected entries with an inline `<!-- PROTECTED -->` comment or maintain a protected list in the pruning log. The decision tree in Step 4 must check for protected status before applying any deletion rule.
Unprotecting: To prune a protected entry, explicitly remove the designation first and document the reason in the pruning log. This two-step process prevents accidental deletion of high-value memories.
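The deletion path can enforce protection mechanically before anything is removed. A sketch with an illustrative entry:

```shell
set -eu

# Entry text is illustrative; in practice this is a line from a memory file.
entry='Chose flat skill directory over nested <!-- PROTECTED -->'

# Refuse to delete anything carrying the protection marker.
if printf '%s' "$entry" | grep -q '<!-- PROTECTED -->'; then
  echo "SKIP: protected entry -- unprotect explicitly before pruning"
else
  echo "DELETE candidate"
fi
```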
Expected: Protected entries survive all prune passes. The pruning log records any protection additions or removals.
On failure: If the protected set grows too large (>30% of total entries), review the criteria — protection is for irreplaceable context, not for "important" entries. Important but reconstructible facts should remain subject to normal pruning.
After deletion, remaining memories may be fragmented — cross-references point to deleted entries, topic files lose coherence, and MEMORY.md may have gaps. Re-synthesis restores structural integrity.
Re-synthesis checklist:
Expected: Post-pruning memory is structurally sound — no orphan references, no redundant fragments, no incoherent topic files. Cold entries are classified for future pruning decisions.
On failure: If re-synthesis reveals that pruning was too aggressive (critical context was lost), check the pruning log and reconstruct from the audit trail. This is why the audit trail exists.
Memory drift occurs when stored facts become silently wrong — not because they were always wrong, but because the underlying reality changed and the memory was not updated. Drift recovery attempts to fix memories in-place rather than pruning them.
Drift detection triggers:
Recovery procedure:
Update the stale value in place and add a `[corrected YYYY-MM-DD]` annotation; if the correct value cannot be determined, mark the entry `unverifiable` and flag it for pruning.

Expected: Drifted memories are corrected in-place where possible, preserving context. Entries that cannot be corrected are flagged for pruning. Prevention rules reduce future drift.
On failure: If drift is widespread (>20% of entries), the memory may need a full rebuild rather than incremental correction. In that case, archive the current memory directory, start fresh, and selectively re-import entries that pass verification.
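The archive-then-rebuild path can be sketched as follows; the temp-directory setup stands in for the real memory path:

```shell
set -eu

# Demo setup: a memory directory with one (drifted) entry
mem_dir=$(mktemp -d)/memory
mkdir -p "$mem_dir"
echo '- drifted fact' > "$mem_dir/MEMORY.md"

# Archive everything for selective re-import, then start fresh.
stamp=$(date +%F)
mv "$mem_dir" "${mem_dir}.archive-$stamp"
mkdir "$mem_dir"
```

Nothing is lost at this point: entries that pass verification are copied back from the archive one by one.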
- manage-memory — the complementary skill for organizing and growing memory; use together for complete memory maintenance
- meditate — clearing and grounding that may reveal which memories are creating noise
- rest — sometimes the best memory maintenance is not doing memory maintenance
- assess-context — evaluating reasoning context health, which memory quality directly affects