By 0mattias
Local file-backed memory MCP server with retrieval-on-demand. Memory is opt-in retrieval, not auto-injected context.
npx claudepluginhub 0mattias/bettermemory --plugin bettermemory
Persistent memory for Claude Code, retrieved on demand instead of force-fed into every prompt.
bettermemory is a Claude Code plugin (and a standalone MCP server for any other client) that fixes a problem common to most LLM memory features. Those tools auto-inject every stored fact into every conversation. They have no sense of which facts are stale, which are relevant, or which you would rather forget. The longer you use them, the more polluted unrelated conversations become. Ask for a Python tutorial, get answers tinted by your home-lab notes. Ask a generic shell question, get advice coloured by a preference you stated months ago. Stale facts get dispensed confidently.
bettermemory inverts the contract. The model calls memory_search only when it actually needs context. Every retrieval ships with three structured staleness signals (verification, path_drift, commit_drift) so the model can spot-check before relying on what it pulled up. Memories live as plain markdown and YAML on disk, so you can grep them, git log them, and hand-edit them. A separate health surface tells you what is dead weight and what has drifted, instead of letting the store grow into a haunted closet of half-true notes.
/plugin marketplace add 0Mattias/bettermemory
/plugin install bettermemory@bettermemory
That is it. Claude Code starts the MCP server, loads a system-prompt-level skill carrying the opt-in retrieval policy, and on the next turn the model has all 17 memory tools and the discipline to use them correctly. For other clients (Claude Desktop, Cursor, Continue, Cline) and manual setup, see § Other MCP clients below.
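For a rough idea of the manual route, most MCP clients take a server entry like the one below (shown in Claude Desktop's `claude_desktop_config.json` shape). The `command` and package name here are placeholders, not the project's documented invocation; check the repository's setup notes for the real one:

```json
{
  "mcpServers": {
    "bettermemory": {
      "command": "npx",
      "args": ["-y", "bettermemory"]
    }
  }
}
```

Cursor, Continue, and Cline each accept an equivalent MCP-server block in their own config files.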
| | Most memory features | bettermemory |
|---|---|---|
| Retrieval | Auto-injected into every prompt | Model calls memory_search only when needed |
| Staleness awareness | None: facts are surfaced as if current | Three structured signals (verification, path_drift, commit_drift) on every retrieval |
| Storage | Opaque database | Plain markdown plus YAML on disk; works with grep, git, and any text editor |
| Curation tools | None: memory just grows | memory_health surfaces dead weight, contradictions, typo scopes, and verification debt |
| Deletes | Gone forever | Tombstones with reason, tombstone-aware dedup, reversible via memory_restore |
| Project scoping | Everything mixed together | Auto-scoped by git repo; cross-project queries are explicit |
| Inferences about you | Saved silently | Structural confirmation tier: the model asks before saving |
| Feedback loop | None | memory_record_use outcomes feed memory_health so dead weight surfaces automatically |
Day one. You tell Claude something:
"When I ask for a tutorial, I want runnable code, not screenshots of an IDE."
Claude calls memory_write(category="user-inference", scopes=["learning-style"], …). Because the memory captures a claim about you, the write goes pending. Claude asks: "Want me to remember that you prefer hands-on tutorials with runnable code?" You confirm. The fact lands in ~/.claude-memory/ as a markdown file you can read, edit, or delete.
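Since memories are plain markdown with YAML on disk, the stored file might look something like this. The category and scopes come from the call above; the remaining frontmatter field names and the date are illustrative guesses, not the actual on-disk schema:

```yaml
---
category: user-inference
scopes: [learning-style]
created: 2026-01-15      # illustrative date
verification: confirmed  # assumed field name
---
Prefers hands-on tutorials with runnable code, not IDE screenshots.
```

Whatever the exact schema, this is the property that matters: the file greps, diffs, and hand-edits like any other text in your home directory.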
Week two, in a fresh session. You ask:
"Walk me through pandas from zero to hero."
The phrase "zero to hero tutorial" is the kind of ambiguity stored preferences could resolve, so Claude calls memory_search, surfaces the stored learning-style memory, and tells you up front: "Using your stored preference for code-driven tutorials…" before answering. Compare with auto-injection memory, which would have done the same thing silently, even on "what is the capital of France?"
Month three. You ask about an unrelated tool:
"What is the difference between find and fd?"
This question is generic. Claude does not call memory_search. The reply is pristine generic-shell prose, untainted by months of accumulated personal context. That is the whole design.