Learning loop plugin marketplace
npx claudepluginhub robinslange/learning-loop
Self-improving learning loop across Claude sessions — retrieval, capture, consolidation, and cross-project transfer
A context engineering plugin for Claude Code. It teaches Claude how to work with what you know.
Episodic memory gives Claude recall. Learning-loop gives Claude judgment. It enforces source verification before anything lands in your vault. It gates promotion on quality scores. It writes in your voice, not its own. It surfaces what you already know before searching the web. The result is a knowledge system that compounds through discipline, not volume.
Claude fabricates sources. Measured rates: ~43% of PubMed IDs, ~26% of DOIs. Without mechanical verification, these contaminate your notes and propagate through every future session that retrieves them.
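Mechanical verification starts cheap: a fabricated identifier that is also malformed can be rejected before any network call, and well-formed identifiers are then resolved against academic APIs. A minimal sketch of the format-check stage (the regexes and function name are illustrative, not the plugin's actual code; real verification still requires resolving the identifier against the publisher's API and comparing metadata):

```python
import re

# Illustrative patterns: DOIs start with "10." plus a registrant code;
# PubMed IDs are short numeric strings.
DOI_RE = re.compile(r"^10\.\d{4,9}/\S+$")
PMID_RE = re.compile(r"^\d{1,8}$")

def plausible_identifier(kind: str, value: str) -> bool:
    """Cheap first-pass format check. Passing this only means the
    identifier *could* exist; a fabricated-but-well-formed ID is
    only caught by the later API resolution step."""
    if kind == "doi":
        return bool(DOI_RE.match(value))
    if kind == "pmid":
        return bool(PMID_RE.match(value))
    return False
```

This two-stage shape (syntax check, then resolve-and-compare) is why a misattributed author on a real PMID is still catchable: the ID resolves, but the returned metadata disagrees with the claim.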
Claude writes like Claude. Without persona enforcement and capture rules, your vault fills with homogeneous LLM prose that sounds the same regardless of topic or domain.
Claude forgets process. It will skip verification, promote half-sourced notes, and synthesize before the evidence supports it. Hooks and quality gates make these failures structurally impossible, not a matter of prompt discipline.
Process enforcement through hooks. Ten lifecycle hooks fire automatically. A pre-write hook catches near-duplicates before they land. A post-write hook adds backlinks. Source verification runs at write time, not as an afterthought. The quality gate blocks promotion regardless of how good the prose sounds.
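The pre-write duplicate check can be pictured as a gate that runs before any note lands. A minimal sketch, assuming a token-overlap similarity and a fixed threshold (both hypothetical — the plugin's actual hook and similarity measure may differ):

```python
def jaccard(a: str, b: str) -> float:
    """Token-set overlap between two notes, in [0, 1]."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb) if ta | tb else 0.0

def pre_write_check(new_note: str, existing: dict[str, str],
                    threshold: float = 0.8):
    """Block the write if any existing note is a near-duplicate.

    Returns ("blocked", path) naming the colliding note, or ("ok", None).
    """
    for path, text in existing.items():
        if jaccard(new_note, text) >= threshold:
            return ("blocked", path)
    return ("ok", None)
```

The point is structural: the check runs on every write, so skipping it is not an option Claude has.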
Four-signal hybrid search. BM25 + vector similarity + Personalized PageRank over your wikilink graph + IDF-weighted tag expansion, fused via RRF. Optional cross-encoder reranking. Graph signals surface bridge notes across domains that no single keyword or embedding would find. Everything runs in a single Rust binary.
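Reciprocal Rank Fusion is simple enough to sketch in a few lines: each signal contributes 1/(k + rank) per document, and the sums are re-sorted. A minimal illustration (the constant k=60 is the common default from the RRF literature; the doc ids and rankings are made up):

```python
def rrf_fuse(rankings: list[list[str]], k: int = 60) -> list[str]:
    """Fuse ranked lists with Reciprocal Rank Fusion.

    Each list is one signal's ranking (best first). A document's
    fused score is the sum of 1/(k + rank) over every list it
    appears in, so items ranked well by several signals rise.
    """
    scores: dict[str, float] = {}
    for ranking in rankings:
        for rank, doc in enumerate(ranking, start=1):
            scores[doc] = scores.get(doc, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=lambda d: scores[d], reverse=True)

bm25 = ["a", "b", "c"]
vector = ["b", "a", "d"]
graph = ["b", "d", "a"]
print(rrf_fuse([bm25, vector, graph]))  # "b" wins: two first-place ranks
```

Because RRF works on ranks, not raw scores, BM25's unbounded scores and cosine similarities on different scales fuse cleanly without normalization — which is what makes mixing four heterogeneous signals practical.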
13 specialized agents. Research, verification, gap analysis, note writing, and batch triage run in parallel. They share 18 skills covering promotion gating, cross-validation, blindspot detection, and source integrity. Lightweight agents run on Haiku. Research agents run on Sonnet.
A vault that earns its structure. Notes flow from inbox through fleeting to permanent. Six criteria gate each transition. Source integrity failures block promotion. The vault grows sharper because every note that reaches permanent status survived mechanical scrutiny.
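The promotion gate can be modeled as an all-or-nothing check over the transition criteria, with source integrity as a hard block. A sketch under assumptions — the six criterion names below are illustrative placeholders, since this page does not enumerate them:

```python
from dataclasses import dataclass

@dataclass
class NoteChecks:
    """Hypothetical criterion names; the plugin defines its own six."""
    atomic: bool        # one idea per note
    sourced: bool       # claims carry citations
    sources_verified: bool  # citations passed mechanical verification
    linked: bool        # wikilinks into the existing graph
    own_words: bool     # written in your voice, not pasted
    durable: bool       # worth keeping beyond the current project

def promotion_gate(checks: NoteChecks) -> bool:
    """Permanent status requires every criterion; prose quality
    is deliberately not an input, so good writing can't smuggle
    an unsourced note past the gate."""
    if not checks.sources_verified:
        return False  # source integrity failures always block
    return all((checks.atomic, checks.sourced, checks.linked,
                checks.own_words, checks.durable))
```

Notes that fail simply stay at their current stage, which is what keeps the permanent tier sharp.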
You run /learning-loop:discovery "caffeine tolerance". The plugin searches your vault first. You already have three notes on caffeine mechanisms and a literature note on CYP1A2. It searches the web, checks sources against 11 academic APIs, catches a misattributed author on a real PMID, and writes atomic notes in your voice. It tells you what you already know and where the gaps are.
You find a paper. /learning-loop:literature "https://arxiv.org/abs/2307.03172" captures it without breaking flow.
Before closing: /learning-loop:reflect. Learnings route to the right stores. Notes that pass the quality gate get promoted. Notes that don't stay where they are until they're ready.
Requires episodic-memory for cross-session conversation search:
claude plugin install episodic-memory@superpowers-marketplace
Then install learning-loop:
/plugin marketplace add robinslange/learning-loop
/plugin install learning-loop@learning-loop-marketplace
Restart Claude Code, then run /learning-loop:init to configure your vault path and persona voice.
This plugin is heavy. It runs local model inference and injects vault context into every session.
Tokens: Every session gets a context injection with your memory index, recent captures, and active intentions. A fresh vault adds almost nothing. A mature vault adds thousands of tokens per session, and grows. Skills like /discovery and /gaps spawn multiple parallel agents, each with its own context window.
Local compute: The ll-search binary (~77MB) bundles two quantized models (BGE-small-en-v1.5 for embeddings, ms-marco-MiniLM for reranking) and runs inference on your machine. On an M4 Max, reranked search takes ~0.6s and indexing ~1.8s. An Apple Silicon Mac with 16GB+ RAM is the practical minimum.
What we do to keep costs down: