Hipocampus plugin marketplace
npx claudepluginhub kevin-hs-sohn/hipocampus

Persistent agent memory that survives across sessions: auto-compacting 3-tier memory with hybrid search. Your agent remembers what it learned, decided, and built.
Drop-in proactive memory harness for AI agents. Zero infrastructure — just files.
One command to set up. Works immediately with Claude Code, OpenCode, and OpenClaw.
Evaluated on MemAware — 900 implicit context questions across 3 months of conversation history. The agent must proactively surface relevant past context that the user never explicitly asks about.
| Method | Easy (n=300) | Medium (n=300) | Hard (n=300) | Overall |
|---|---|---|---|---|
| No Memory | 1.0% | 0.7% | 0.7% | 0.8% |
| BM25 Search | 4.7% | 1.7% | 2.0% | 2.8% |
| BM25 + Vector Search | 6.0% | 3.7% | 0.7% | 3.4% |
| Hipocampus (tree only) | 14.7% | 5.7% | 7.3% | 9.2% |
| Hipocampus + BM25 | 18.7% | 10.0% | 5.7% | 11.4% |
| Hipocampus + Vector | 26.0% | 18.0% | 8.0% | 17.3% |
| Hipocampus + Vector (10K ROOT) | 34.0% | 21.0% | 8.0% | 21.0% |
Hipocampus + Vector is 21.6x better than no memory and 5.1x better than search alone. On hard questions (cross-domain, zero keyword overlap), Hipocampus scores 8.0% vs 0.7% for vector search — 11.4x better. Search structurally cannot find these connections; the compaction tree can.
Increasing the ROOT.md budget from 3K to 10K tokens (120 topics vs 39) improves Easy from 26% to 34% and overall from 17.3% to 21.0% — more topic coverage means more connections found. Hard tier remains at 8.0%, indicating cross-domain reasoning is bottlenecked by the answer model, not the index size.
/plugin marketplace add kevin-hs-sohn/hipocampus
/plugin install hipocampus@kevin-hs-sohn/hipocampus
Then run `npx hipocampus init` for full setup.
npx hipocampus init
npx hipocampus init --no-vector # BM25 only (saves ~2GB disk)
npx hipocampus init --no-search # Compaction tree only, no qmd
npx hipocampus init --platform claude-code # Override platform detection
AI agents forget everything between sessions. The obvious solutions — RAG, long context windows, memory files — each solve part of the problem. But they all miss the hardest part: knowing that relevant context exists when nobody asked about it.
You ask your agent: "Refactor this API endpoint for the new payment flow."
Three weeks ago, you and the agent had a long discussion about API rate limiting and decided on a token bucket strategy. That decision is recorded in the session logs. But the agent doesn't know it exists — so it refactors the endpoint without considering rate limits. The payment flow starts dropping requests under load a week later.
This isn't a retrieval failure. The agent never searched for "rate limiting" because the user asked about "payment flow." There is no search query that connects these. The connection only exists if the agent has a holistic view of its own knowledge.
Large context windows (200K–1M tokens): You could dump all history into context. But attention degrades with length: important details from three weeks ago get drowned out by noise. And every API call pays for the full context again. At 500K tokens per call and typical pricing of a few dollars per million input tokens, each call costs on the order of a dollar or two, which quickly becomes prohibitive.
RAG (vector search, BM25): Powerful when you know what to search for. But search requires a query, and a query requires suspecting that relevant context exists. Our MemAware benchmark confirms: BM25 search scores just 2.8% on implicit context — barely better than no memory (0.8%), while consuming 5x the tokens. Search is a precision tool for known unknowns. It cannot help with unknown unknowns.
Memory files (MEMORY.md, auto memory): Good for the first week. After a month, hundreds of decisions and insights can't fit in a system prompt. You're forced to choose what to keep, and the agent doesn't know what it has forgotten.
Hipocampus maintains a ~3K token topic index (ROOT.md) that compresses your entire conversation history into a scannable overview — like a table of contents for everything the agent has ever discussed. This is auto-loaded into every session.
When a request comes in, the agent already sees all past topics at zero search cost. It notices connections that search would miss — "this refactoring task relates to the rate limiting decision from three weeks ago" — and retrieves specific details on demand via search or tree traversal.
The effect is similar to injecting your full history into every API call, at a fraction of the token cost.
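To make this concrete, here is a minimal sketch of what a ROOT.md topic index could look like. The section names, topics, and pointer paths below are illustrative assumptions for this example, not Hipocampus's actual file format:

```markdown
<!-- ROOT.md — hypothetical excerpt; the real format may differ -->
# Topic Index (~3K token budget)

## Infrastructure
- API rate limiting: decided on token bucket strategy → details in tree/infra/rate-limiting.md
- Payment flow: endpoint redesign in progress → details in tree/payments/refactor.md

## Decisions
- Background jobs: chose queue-based retries over cron polling → tree/decisions/jobs.md
```

Because an index like this rides along in every session, the agent can notice that "refactor the payment endpoint" touches the rate-limiting decision, then pull the full discussion from the tree only when it is actually needed.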
Like a CPU cache hierarchy:
Layer 1 — Hot (always loaded, ~3K tokens)