From rune
Captures decisions, patterns, errors, and insights with semantic links via Neural Memory MCP. Enables cross-project recall, hypothesis tracking, and evidence-based reasoning.
Install: `npx claudepluginhub rune-kit/rune --plugin @rune/analytics`

This skill uses the workspace's default tool permissions.
Bridges Rune's file-based persistence (session-bridge, journal) with Neural Memory MCP's semantic graph. While session-bridge saves decisions to .rune/ files and journal tracks ADRs locally, neural-memory captures cross-project learnable patterns — decisions, error root causes, architectural insights, and workflow preferences — into a persistent cognitive layer that compounds across every project and session.
Without this skill, each project is an island. With it, a caching pattern discovered in Project A auto-surfaces when Project B faces a similar problem.
Auto-trigger:
- cook completes a feature → Run Capture Mode (save learnings)
- debug finds root cause → Run Capture Mode (save error pattern)
- review finds issues → Run Capture Mode (save code quality insight)
- rescue completes a phase → Run Capture Mode (save refactoring pattern)
- journal writes an ADR → Run Capture Mode (extract to nmem)

Manual trigger:
- /rune recall <topic> — search neural memory for a topic
- /rune remember <text> — save a specific memory
- /rune brain-health — check neural memory health + maintenance
- /rune hypothesize <question> — start hypothesis tracking

Skill integrations:
- session-bridge (L3): after Capture Mode — sync key decisions back to .rune/ files
- cook (L1): Phase 0 (resume) + Phase 8 (complete) — recall context at start, capture learnings at end
- rescue (L1): phase start + phase end — recall past refactoring patterns, capture new ones
- debug (L2): after root cause found — capture error pattern for future recognition
- fix (L2): after fix verified — capture fix pattern (cause → solution)
- review (L2): after review complete — capture code quality insight
- plan (L2): before architecture decisions — recall past decisions on similar problems
- sentinel (L2): after security finding — capture vulnerability pattern
- incident (L2): after resolution — capture incident root cause + fix
- retro (L2): during retrospective — capture retro insights and patterns
- session-bridge (L3): Step 6 (cross-project extraction) — extract generalizable patterns
- journal (L3): after ADR written — extract decision + rejected alternatives
- context-engine (L3): before compaction — trigger Flush Mode to preserve context

Load relevant context from neural memory before starting work.
Step 1 — Identify Recall Topics
Read .rune/progress.md and current task context to determine 3-5 diverse recall topics.
Always prefix queries with the project name to avoid cross-project noise.
GOOD: "Rune compiler cross-reference resolution"
GOOD: "MyTrend PocketBase auth session handling"
BAD: "cross-reference" (too generic, returns all projects)
BAD: "auth" (returns noise from every project)
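The project-prefix rule above can be enforced mechanically. A minimal sketch, assuming hypothetical inputs (`project`, `topics`) that are not part of the nmem API:

```python
def build_recall_queries(project: str, topics: list[str]) -> list[str]:
    """Prefix each recall topic with the project name (Step 1 rule).

    Bare generic topics like "auth" would match every project, so every
    query carries the project prefix. Duplicates are dropped, order kept.
    """
    seen: set[str] = set()
    queries: list[str] = []
    for topic in topics:
        query = f"{project} {topic.strip()}"
        if query not in seen:
            seen.add(query)
            queries.append(query)
    return queries
```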
Step 2 — Execute Recall
Call nmem_recall for each topic. Use diverse angles:
- "<project> React state management"
- "<project> caching strategy decision"
- "<project> error handling approach"

Step 3 — Synthesize Context
Summarize recalled memories into actionable context:
Step 4 — Surface Gaps
If recall returns thin results for the current domain, note the gap.
Call nmem_gaps(action="detect") if working in a domain with sparse memories.
Extract learnable patterns from completed work and save to neural memory.
Step 1 — Classify What Happened
Determine which memory types to create from the completed task:
| What happened | Memory type | Priority | Example |
|---|---|---|---|
| Chose approach A over B | decision | 7 | "Chose Zustand over Redux because single-store simpler for this scale" |
| Found and fixed a bug | error | 7 | "Root cause was stale closure in useEffect — fixed by adding dep array" |
| Discovered a reusable pattern | insight | 6 | "This codebase uses barrel exports for every feature module" |
| Learned user preference | preference | 8 | "User prefers Phosphor Icons over Lucide for all UI work" |
| Established a workflow | workflow | 6 | "Deploy: build → test → push → verify CI → tag" |
| Found a fact worth keeping | fact | 5 | "API rate limit is 100 req/min on free tier" |
| Received instruction to follow | instruction | 8 | "Always run prettier before commit in this project" |
Step 2 — Craft Rich Memories
Each memory MUST use cognitive language patterns for strong neural connections:
BAD: "PostgreSQL" (flat, no context — orphan neuron)
GOOD: "Chose PostgreSQL over MongoDB because ACID needed for payment processing"
BAD: "Fixed auth bug" (no root cause — useless for future recall)
GOOD: "Auth cookie expired silently because SameSite=Lax blocked cross-origin. Fixed by setting SameSite=None + Secure flag"
BAD: "React project structure" (vague — won't match specific queries)
GOOD: "Rune compiler uses 3-stage pipeline: Parse SKILL.md → Transform cross-refs → Emit per-platform files"
Cognitive patterns to use: causal ("X happened because Y"), decisional ("Chose X over Y because Z"), and comparative ("X is simpler/faster than Y for this case").
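The BAD/GOOD contrast above can be screened heuristically before saving. A rough sketch — the marker list is illustrative, not an nmem rule set:

```python
import re

# Heuristic markers for cognitive language: causal, decisional,
# comparative, and error-pattern phrasing. Flat facts match none of them.
COGNITIVE_MARKERS = [
    r"\bbecause\b", r"\bso that\b", r"\bcaused\b",       # causal
    r"\bchose\b.*\bover\b", r"\bdecided\b",              # decisional
    r"\b(better|worse|faster|simpler) than\b",           # comparative
    r"\bfixed by\b", r"\broot cause\b",                  # error pattern
]

def is_rich_memory(text: str) -> bool:
    """True if the memory text contains at least one cognitive marker."""
    return any(re.search(p, text, re.IGNORECASE) for p in COGNITIVE_MARKERS)
```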
Step 3 — Tag and Prioritize
Every memory MUST include tags: [project-name, technology, topic] — lowercase, specific.

Step 4 — Save Memories
Call nmem_remember for each memory. Save 2-5 memories per completed task:
Step 5 — Reinforce Connections
After saving, call nmem_recall on the topic to reinforce new neural connections.
This activates related neurons and strengthens the memory graph.
Track uncertain decisions with evidence over time.
Step 1 — Form Hypothesis
When making an uncertain architectural or design decision:
nmem_hypothesize("Redis will handle our session load better than Memcached
because our access pattern is 80% reads with complex data types")
Step 2 — Collect Evidence
As you work, update the hypothesis with evidence:
nmem_evidence(hypothesis_id, "Redis handled 10K concurrent sessions with
p99 < 5ms in load test — SUPPORTS hypothesis")
nmem_evidence(hypothesis_id, "Memory usage 2x higher than Memcached estimate
— WEAKENS hypothesis for memory-constrained deployments")
Step 3 — Make Predictions
Create falsifiable predictions:
nmem_predict("If we switch to Redis Cluster, session failover time will drop
from 30s to < 2s")
Step 4 — Verify Outcomes
After deployment/testing, verify:
nmem_verify(prediction_id, outcome="Failover time dropped to 1.2s — CONFIRMED")
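The hypothesize → evidence → verify lifecycle can be sketched as a standalone illustration. This mirrors the nmem_* calls above but is not the Neural Memory implementation:

```python
from dataclasses import dataclass, field

@dataclass
class Hypothesis:
    """Local sketch of a tracked hypothesis with accumulating evidence."""
    statement: str
    evidence: list[tuple[str, str]] = field(default_factory=list)

    def add_evidence(self, note: str, effect: str) -> None:
        """effect is 'SUPPORTS' or 'WEAKENS', as in the examples above."""
        assert effect in ("SUPPORTS", "WEAKENS")
        self.evidence.append((note, effect))

    def score(self) -> int:
        """Net support: +1 per supporting item, -1 per weakening item."""
        return sum(1 if e == "SUPPORTS" else -1 for _, e in self.evidence)
```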
Capture remaining context before session ends.
Step 1 — Scan Unsaved Context
Review the current session for:
Step 2 — Batch Save
Call nmem_auto(action="process", text="<session summary>") with a concise summary
of the session's key outcomes, decisions, and learnings.
Step 3 — Update Session Bridge
If significant decisions were captured, also call session-bridge to sync
the most important ones to .rune/decisions.md for local persistence.
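Assembling the text passed to nmem_auto(action="process", ...) can be sketched as follows — the helper and its field names are illustrative, not part of the nmem API:

```python
def build_flush_summary(decisions: list[str], learnings: list[str]) -> str:
    """Concise session summary: one labeled line per non-empty category."""
    lines: list[str] = []
    if decisions:
        lines.append("Decisions: " + "; ".join(decisions))
    if learnings:
        lines.append("Learnings: " + "; ".join(learnings))
    return "\n".join(lines)
```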
Keep the neural memory healthy and useful.
Step 1 — Health Check
Call nmem_health() to assess brain status. Key metrics:
Step 2 — Consolidation
If the brain has >100 memories or consolidation is low:
nmem_consolidate — merge episodic → semantic memories
Step 3 — Review Queue
Call nmem_review(action="queue") to surface memories needing attention:
Step 4 — Corrections
Fix bad memories:
- nmem_edit(memory_id, type="correct_type")
- nmem_edit(memory_id, content="corrected text")
- nmem_forget(memory_id, reason="outdated")
- nmem_forget(memory_id, hard=true)

Step 5 — Connection Tracing
Use nmem_explain(entity_a, entity_b) to trace paths between concepts.
Useful for understanding why certain memories surface together.
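Turning nmem_health()-style metrics into pass/warn statuses can be sketched as below. The thresholds are illustrative defaults, not values defined by Neural Memory MCP:

```python
def assess_health(total: int, consolidation_pct: float, orphan_pct: float) -> dict[str, str]:
    """Map raw health metrics to OK/WARN statuses (illustrative cutoffs)."""
    return {
        # Low consolidation means episodic memories aren't merging into
        # semantic ones.
        "consolidation": "OK" if consolidation_pct >= 50 else "WARN",
        # High orphan share means too many flat, unconnected memories.
        "orphans": "OK" if orphan_pct <= 10 else "WARN",
        # Step 2's rule: consolidate when >100 memories and consolidation low.
        "needs_consolidate": "yes" if total > 100 and consolidation_pct < 50 else "no",
    }
```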
## Neural Memory Recall — <project>
### Loaded Context
- <memory 1 summary — decision/pattern/insight>
- <memory 2 summary>
- <memory 3 summary>
### Applicable to Current Task
- <how memory X applies>
- <how memory Y applies>
### Gaps Detected
- <domain with sparse coverage>
## Neural Memory Capture — <task summary>
### Saved Memories
| # | Type | Priority | Tags | Content (preview) |
|---|------|----------|------|--------------------|
| 1 | decision | 7 | [project, tech, topic] | Chose X over Y because... |
| 2 | error | 7 | [project, bug, tech] | Root cause was X... |
| 3 | insight | 6 | [project, pattern] | This codebase uses... |
### Reinforced Topics
- <topic recalled to strengthen connections>
## Neural Memory Health
| Metric | Value | Status |
|--------|-------|--------|
| Total memories | N | — |
| Consolidation | N% | ✅ / ⚠️ |
| Orphans | N% | ✅ / ⚠️ |
| Activation | level | ✅ / ⚠️ |
| Top penalty | <metric> | Fix: <action> |
### Recommended Actions
1. <action with command>
- Session state goes to .rune/ files (session-bridge) or git. nmem is for learnable patterns only.
- Tag every memory with [project-name, technology, topic]. Tags enable future recall precision.

| Failure Mode | Severity | Mitigation |
|---|---|---|
| Cross-project noise from generic queries | HIGH | Always prefix queries with project name. Use nmem_explain to trace unexpected connections |
| Orphan neurons from flat facts | HIGH | Enforce cognitive language patterns (causal, decisional, comparative). Run nmem_health to detect orphan % |
| Memory bloat from over-saving | MEDIUM | Cap at 5 memories per task. Run nmem_consolidate weekly. Use nmem_review to prune |
| Stale decisions applied to changed codebase | MEDIUM | Include temporal context ("As of v2.1, ..."). Verify recalled decisions against current code before applying |
| Duplicate memories from repeated sessions | MEDIUM | Before saving, nmem_recall the topic first to check for existing memories. Update rather than create duplicates |
| Loss of nuance from oversimplification | LOW | Save rejected alternatives alongside chosen approach. Use nmem_hypothesize for uncertain decisions |
Recall Mode:
Capture Mode:
- tags use [project, technology, topic]
Flush Mode:
- nmem_auto called with session summary
Maintenance Mode:
- nmem_health run and metrics assessed