claude-memory
Manages structured project memories. Provides format instructions for writing and updating memory files in .claude/memory/.
Install: `npx claudepluginhub idnotbe/claude-memory`

This skill uses the workspace's default tool permissions.
Structured memory stored in .claude/memory/. When instructed to save a memory, follow the steps below.
Plugin self-check: Before running any memory operations, verify plugin scripts are accessible by confirming that `"${CLAUDE_PLUGIN_ROOT}/hooks/scripts/memory_candidate.py"` exists. If `CLAUDE_PLUGIN_ROOT` is unset or the file is missing, stop and report the error.
Architecture version check: Read the `memory-config.json` key `architecture.simplified_flow`. If explicitly set to `false`, fall back to the v5 orchestration flow in `SKILL.md.v5`. If `SKILL.md.v5` is not found, report the error: "Cannot fall back to v5 flow: SKILL.md.v5 is missing. Set architecture.simplified_flow to true in memory-config.json or restore SKILL.md.v5." and stop. The instructions below assume `simplified_flow: true` (default).
| Category | Folder | What It Captures |
|---|---|---|
| session_summary | sessions/ | Work resume snapshot |
| decision | decisions/ | Choice + rationale (why X over Y) |
| runbook | runbooks/ | Error fix procedure (diagnose, fix, verify) |
| constraint | constraints/ | Known limitations (enduring walls) |
| tech_debt | tech-debt/ | Deferred work (what was skipped and why) |
| preference | preferences/ | Conventions (how things should be done) |
Each category has a configurable description field in memory-config.json (under categories.<name>.description). Descriptions are included in triage context files and retrieval output to help classify content accurately.
Staging directory: Memory staging files are stored in <staging_base>/.claude-memory-staging-<hash>/ where <hash> is a deterministic SHA-256 prefix derived from UID:realpath(project_path). The staging base is resolved via a 4-tier priority in memory_staging_utils._resolve_staging_base():
1. `XDG_RUNTIME_DIR` -- if set, 0700, owned by euid, and is a directory (rejects WSL2's 0777)
2. `/run/user/$UID` -- Linux systemd fallback, same ownership/permission checks
3. `os.confstr("CS_DARWIN_USER_TEMP_DIR")` -- macOS per-user temp dir, bypasses TMPDIR
4. `$XDG_CACHE_HOME/claude-memory/staging` (or `~/.cache/claude-memory/staging`) -- universal fallback, created with 0700

There is no `/tmp/` fallback -- this eliminates the `/tmp/` symlink attack class. The triage-data.json file includes a `staging_dir` field with the exact resolved path. All staging file references below use `<staging_dir>` as shorthand.
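The `<hash>` derivation can be sketched in Python (an illustration only; the 16-character prefix length is an assumption, and the real logic lives in `memory_staging_utils`):

```python
import hashlib
import os

def staging_dir_for(project_path: str, staging_base: str) -> str:
    """Derive the deterministic staging directory for a project (sketch)."""
    # Key is "UID:realpath(project_path)" so distinct users and distinct
    # checkouts of the same project never collide.
    key = f"{os.getuid()}:{os.path.realpath(project_path)}"
    digest = hashlib.sha256(key.encode("utf-8")).hexdigest()
    prefix = digest[:16]  # prefix length is a hypothetical choice
    return os.path.join(staging_base, f".claude-memory-staging-{prefix}")
```

Because the key uses `realpath`, a project reached through a symlink maps to the same staging directory as its canonical path.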
When a triage hook fires with a save instruction:
Before parsing triage output, check for stale staging files from a previous failed session.
Only run this check when no <triage_data> or <triage_data_file> tag is present in the
current hook output (i.e., manual /memory:save invocation or recovery). If triage output IS
present, skip directly to SETUP -- the current triage data is fresh.
Determine `<staging_dir>`: use the `staging_dir` field from triage-data.json if available, or compute it using `memory_staging_utils.get_staging_dir()` (the path is `<staging_base>/.claude-memory-staging-<hash>/`, where `<staging_base>` is the 4-tier resolved base and `<hash>` is derived from `UID:realpath(project_path)`). Check if ANY of these exist:
- `<staging_dir>/.triage-pending.json`
- `<staging_dir>/triage-data.json` WITHOUT a corresponding `<staging_dir>/last-save-result.json`

If either exists, run:

python3 "${CLAUDE_PLUGIN_ROOT}/hooks/scripts/memory_write.py" --action cleanup-staging --staging-dir <staging_dir>
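The staleness condition above can be sketched in Python (illustrative only; file names follow the description above):

```python
import os

def needs_staging_cleanup(staging_dir: str) -> bool:
    """Return True if leftovers from a previous failed session are present."""
    pending = os.path.join(staging_dir, ".triage-pending.json")
    triage = os.path.join(staging_dir, "triage-data.json")
    result = os.path.join(staging_dir, "last-save-result.json")
    # Stale if a pending marker exists, or triage data was written
    # but no save result ever landed.
    return os.path.exists(pending) or (
        os.path.exists(triage) and not os.path.exists(result)
    )
```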
Pre-existing context files may be stale (unknown age, missing transcript). Always run fresh triage for accurate saves.
Step 1: Parse triage output (must run FIRST to obtain <staging_dir>).
1. Look for `<triage_data_file>...</triage_data_file>` tags in the stop hook output. If present, read the JSON file at that path. The JSON includes a `staging_dir` field -- use this for all subsequent staging file paths.
2. Otherwise, fall back to an inline `<triage_data>` JSON block (backwards compatibility). If it lacks `staging_dir`, compute it from the project path.

Step 2: Clean stale intent files. Remove leftover intent files from previous sessions (requires `<staging_dir>` from Step 1):
python3 "${CLAUDE_PLUGIN_ROOT}/hooks/scripts/memory_write.py" --action cleanup-intents --staging-dir <staging_dir>
Step 3: Read config. Read memory-config.json for triage.parallel.category_models and triage.parallel.verification_enabled.
Categories are triggered by keyword heuristic scoring in memory_triage.py. Each category has primary keyword patterns and co-occurrence boosters. Thresholds are configurable via triage.thresholds.* in config (default range: 0.4-0.6). SESSION_SUMMARY uses activity metrics instead of text matching.
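A minimal sketch of keyword-heuristic scoring with co-occurrence boosters (the patterns, weights, and 0.1 boost increment are hypothetical; the real implementation is in memory_triage.py):

```python
import re

def score_category(text: str, primary: list[str], boosters: list[str],
                   threshold: float = 0.5) -> tuple[float, bool]:
    """Score text against one category's keyword patterns (sketch).

    Primary patterns carry the weight; booster terms only count
    when at least one primary pattern co-occurs.
    """
    hits = sum(bool(re.search(p, text, re.IGNORECASE)) for p in primary)
    score = min(1.0, hits / max(len(primary), 1))
    if hits:  # co-occurrence boost only when a primary pattern matched
        boost = sum(bool(re.search(b, text, re.IGNORECASE)) for b in boosters)
        score = min(1.0, score + 0.1 * boost)
    return score, score >= threshold

# Hypothetical patterns for the "decision" category:
score, triggered = score_category(
    "We decided to use SQLite instead of Postgres because of simplicity.",
    primary=[r"\bdecided\b", r"\binstead of\b"],
    boosters=[r"\bbecause\b"],
)
```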
If triage.parallel.enabled is false, fall back to the sequential flow: process each category one at a time using the current model (no Agent subagents).
For EACH triggered category, spawn an Agent subagent using the memory-drafter agent file:
Agent(
subagent_type: "memory-drafter",
model: config.category_models[category.lower()] or default_model,
run_in_background: true,
prompt: "Category: <cat>\nContext file: <staging_dir>/context-<cat>.txt\nOutput: <staging_dir>/intent-<cat>.json"
)
The memory-drafter agent has tools: Read, Write only (no Bash), which structurally prevents Guardian conflicts. Each subagent reads its context file and writes an intent JSON file -- nothing more.
Important: The <triage_data> JSON block emits lowercase category names
(e.g., "decision"), matching config keys and memory_candidate.py expectations.
The human-readable stderr section may use UPPERCASE for readability, but always
use the lowercase category value from the JSON for model lookup, CLI calls,
and file operations.
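The model lookup implied above can be sketched as (config shape assumed from the `triage.parallel.*` keys used elsewhere in this document):

```python
def model_for(category: str, config: dict) -> str:
    """Look up the drafting model for a category (sketch)."""
    parallel = config.get("triage", {}).get("parallel", {})
    models = parallel.get("category_models", {})
    default = parallel.get("default_model", "haiku")
    # Always key by the lowercase category value from the JSON block,
    # never the uppercase display form from stderr.
    return models.get(category.lower(), default)
```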
Spawn ALL category subagents in PARALLEL (single message, multiple Agent calls).
Background subagents run concurrently. Do NOT proceed to Phase 1.5 or Phase 2 until every background subagent has returned a completion notification. For each completed subagent, verify it succeeded before reading its intent file. If a subagent failed, skip that category (log warning) and continue with remaining categories.
M1 Fallback: If ALL Phase 1 drafters fail (no intent-*.json files produced), write a pending file using the Write tool:
Write(
file_path: "<staging_dir>/.triage-pending.json",
content: '{"categories": ["all"], "reason": "total_drafter_failure", "timestamp": "<ISO 8601 UTC>"}'
)
Then stop -- do not proceed to Phase 2. The retrieval hook will detect this on the next session.
Context file format (<staging_dir>/context-<category>.txt):
Each context file contains a header with the category name and score, optionally
followed by a Description: line (from categories.<name>.description in config),
then a <transcript_data> block wrapping relevant transcript excerpts. For text-based
categories, these are keyword-matched snippets with surrounding context (+/- 10 lines).
For SESSION_SUMMARY, activity metrics (tool uses, distinct tools, exchanges) are provided,
followed by transcript excerpts: the full transcript if short (<280 lines), or the head (80 lines).
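A context file might look like this (an illustrative sketch of the format described above; the exact header syntax is an assumption):

```
Category: decision (score: 0.72)
Description: Choice + rationale (why X over Y)
<transcript_data>
... keyword-matched transcript excerpts with +/- 10 lines of context ...
</transcript_data>
```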
Subagent output: Each subagent writes one of two intent JSON types:
{ "category", "new_info_summary", "intended_action"?, "lifecycle_hints"?, "partial_content": { "title", "tags", "confidence", "related_files"?, "change_summary", "content" } }{ "category", "action": "noop", "noop_reason" }If context_file is missing from the triage entry for a category (can happen on
staging directory write failure), skip that category with a warning.
If a subagent fails or writes invalid JSON, skip that category (log warning) and continue.
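A save intent for the decision category might look like this (hypothetical values; the shape follows the intent schema above):

```json
{
  "category": "decision",
  "new_info_summary": "Chose SQLite over Postgres for local-first storage",
  "partial_content": {
    "title": "Use SQLite for local storage",
    "tags": ["storage", "architecture"],
    "confidence": 0.8,
    "change_summary": "Initial record of the storage decision",
    "content": "..."
  }
}
```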
Skip check: If triage.parallel.verification_enabled is false in config (default: false), skip Phase 1.5 entirely and proceed directly to Phase 2 COMMIT with --action run.
When verification IS enabled (verification_enabled: true):
Step 1: Prepare. Run the orchestrator in prepare mode:
python3 "${CLAUDE_PLUGIN_ROOT}/hooks/scripts/memory_orchestrate.py" \
--staging-dir <staging_dir> --action prepare --memory-root <memory_root>
This runs steps 1-6 (collect intents, candidate selection, CUD resolution, draft assembly, manifesting) and writes <staging_dir>/orchestration-result.json.
Step 2: Identify risk-eligible categories. Read the manifest. Categories eligible for verification:
- `decision` or `constraint` categories
- DELETE actions

If no categories are eligible, skip to Step 4.
Step 3: Verify. For each eligible category, spawn a verification Agent subagent:
- Input: the `draft_path` from the manifest
- Verdict: PASS, BLOCK (hallucination/factual error), or REVISE (advisory)

Spawn ALL verification subagents in PARALLEL. Collect verdicts.
Step 4: Commit. Build the exclude list from any BLOCK verdicts, then run:
python3 "${CLAUDE_PLUGIN_ROOT}/hooks/scripts/memory_orchestrate.py" \
--staging-dir <staging_dir> --action commit --memory-root <memory_root> \
--exclude-categories <comma-separated-blocked-categories>
When verification is DISABLED (default):
python3 "${CLAUDE_PLUGIN_ROOT}/hooks/scripts/memory_orchestrate.py" \
--staging-dir <staging_dir> --action run --memory-root <memory_root>
This single command handles everything:
- Candidate selection (`memory_candidate.py`) per category + captures OCC hashes
- Draft assembly (`memory_draft.py`) per category, generates target paths
- Commit: `memory_write.py` calls (with `--skip-auto-enforce`), enforcement, result file, cleanup

If ALL categories are NOOP (manifest status is "all_noop"): the script exits cleanly. No saves performed.
On partial failure: .triage-pending.json is written with failed categories. Staging is preserved. Sentinel set to failed.
On full success: staging cleaned up, result file written, sentinel set to saved.
On orchestrator crash (non-zero exit): Do NOT retry the orchestrator. Report the error to the user with the stderr output and stop. The orchestrator's own exception handler writes .triage-pending.json and sets sentinel to failed, so recovery will happen on next triage. Retrying risks duplicate saves or data corruption.
Final output rule: After Phase 2 completes, output ONLY the single-line save summary (e.g., "Saved: session_summary (create), decision (update)"). No intermediate status, phase completion messages, or additional commentary.
memory_write.py enforces these protections automatically:
- Anti-resurrection: recreating a recently retired/deleted slug raises `ANTI_RESURRECTION_ERROR`. Use a different title/slug, wait 24 hours, or restore the old memory and update it.
- Immutable fields: `created_at`, `schema_version`, `category` cannot change.
- `record_status` cannot be changed via UPDATE (use retire/archive actions).
- `related_files`: grow-only, except non-existent (dangling) paths can be removed.
- `changes[]`: append-only; at least 1 new change entry is required per update.
- `changes[]` is capped at 50 entries (oldest dropped).
- OCC: `--hash` checks the file's current MD5 against the expected hash. Mismatches produce `OCC_CONFLICT` (re-read and retry).

| L1 (Python) | L2 (Subagent) | Resolution | Rationale |
|---|---|---|---|
| CREATE | CREATE | CREATE | Agreement |
| UPDATE_OR_DELETE | UPDATE | UPDATE | Agreement |
| UPDATE_OR_DELETE | DELETE | DELETE | Structural permits |
| CREATE | UPDATE | CREATE | Structural: no candidate exists |
| CREATE | DELETE | NOOP | Cannot DELETE with 0 candidates |
| UPDATE_OR_DELETE | CREATE | CREATE | Subagent says new despite candidate |
| VETO | * | OBEY VETO | Mechanical invariant |
| NOOP | * | NOOP | No target |
This table is implemented in memory_orchestrate.py. It is documented here for reference.
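The resolution table can be sketched as follows (illustrative only; memory_orchestrate.py is authoritative):

```python
def resolve(python_action: str, subagent_action: str) -> str:
    """Resolve the L1 (Python) / L2 (subagent) action pair per the table."""
    if python_action == "VETO":
        return "VETO"   # mechanical invariant always wins
    if python_action == "NOOP":
        return "NOOP"   # no target to act on
    if python_action == "CREATE":
        # No candidate exists: UPDATE degrades to CREATE,
        # DELETE is structurally impossible.
        return {"CREATE": "CREATE", "UPDATE": "CREATE", "DELETE": "NOOP"}[subagent_action]
    if python_action == "UPDATE_OR_DELETE":
        # A candidate exists; the subagent picks the concrete action.
        return {"UPDATE": "UPDATE", "DELETE": "DELETE", "CREATE": "CREATE"}[subagent_action]
    raise ValueError(f"unknown action: {python_action}")
```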
See action-plans/_ref/MEMORY-CONSOLIDATION-PROPOSAL.md for the original 3-layer design.
Key principles:
Common fields (all categories):
{ schema_version: "1.0", category, id (=slug), title (max 120 chars),
created_at (ISO 8601 UTC), updated_at, tags[] (min 1),
related_files[], confidence (0.0-1.0),
record_status: "active"|"retired"|"archived" (default: "active"),
changes: [{ date, summary, field?, old_value?, new_value? }] (max 50),
times_updated: integer (default: 0),
retired_at?, retired_reason?, archived_at?, archived_reason?,
content: {...} }
record_status (top-level system lifecycle):
| Status | Behavior |
|---|---|
| active | Indexed and retrievable (default for all new memories) |
| retired | Excluded from index; GC-eligible after 30-day grace period |
| archived | Excluded from index; NOT GC-eligible (preserved indefinitely) |
This is separate from content.status which tracks category-specific state (e.g., decision: proposed/accepted/deprecated/superseded).
Content by category:
- session_summary: `{ goal, outcome: "success|partial|blocked|abandoned", completed[], in_progress[], blockers[], next_actions[], key_changes[] }`
- decision: `{ status: "proposed|accepted|deprecated|superseded", context, decision, alternatives: [{option, rejected_reason}], rationale[], consequences[] }`
- runbook: `{ trigger, symptoms[], steps[], verification, root_cause, environment }`
- constraint: `{ kind: "limitation|gap|policy|technical", rule, impact[], workarounds[], severity: "high|medium|low", active: true, expires: "condition or 'none'" }`
- tech_debt: `{ status: "open|in_progress|resolved|wont_fix", priority: "critical|high|medium|low", description, reason_deferred, impact[], suggested_fix[], acceptance_criteria[] }`
- preference: `{ topic, value, reason, strength: "strong|default|soft", examples: { prefer[], avoid[] } }`

Full JSON Schema definitions are in the plugin's `assets/schemas/` directory.
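A complete decision memory might look like this (hypothetical values, combining the common fields with the decision content schema):

```json
{
  "schema_version": "1.0",
  "category": "decision",
  "id": "use-sqlite-for-local-storage",
  "title": "Use SQLite for local storage",
  "created_at": "2025-01-15T10:00:00Z",
  "updated_at": "2025-01-15T10:00:00Z",
  "tags": ["storage"],
  "related_files": ["src/db.py"],
  "confidence": 0.8,
  "record_status": "active",
  "changes": [{ "date": "2025-01-15", "summary": "Initial creation" }],
  "times_updated": 0,
  "content": {
    "status": "accepted",
    "context": "Need zero-ops local persistence",
    "decision": "Use SQLite",
    "alternatives": [{ "option": "Postgres", "rejected_reason": "operational overhead" }],
    "rationale": ["single file, no server process"],
    "consequences": ["limited concurrent writers"]
  }
}
```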
Session summaries use a rolling window strategy: keep the last N sessions (default 5, configurable via categories.session_summary.max_retained in memory-config.json), retire the oldest when the limit is exceeded.
The rolling window is enforced AFTER a new session summary is successfully created:
1. Scan the `sessions/` folder, counting only files with `record_status == "active"` (or the field absent, for pre-v4 files).
2. If the count exceeds `max_retained` (default 5), identify the oldest session by `created_at` timestamp.
3. Retire it via `memory_enforce.py`. The script acquires the index lock, scans for active sessions, and retires excess sessions in a single atomic operation. The retirement reason is "Session rolling window: exceeded max_retained limit".

In memory-config.json:
{
"categories": {
"session_summary": {
"max_retained": 5
}
}
}
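The selection step can be sketched as (illustrative; memory_enforce.py performs the real, lock-protected operation):

```python
def sessions_to_retire(sessions: list[dict], max_retained: int = 5) -> list[dict]:
    """Pick the oldest active sessions beyond the retention limit (sketch)."""
    active = [s for s in sessions
              if s.get("record_status", "active") == "active"]  # pre-v4 files lack the field
    if len(active) <= max_retained:
        return []
    active.sort(key=lambda s: s["created_at"])  # ISO 8601 UTC sorts lexically
    return active[: len(active) - max_retained]
```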
Users can also manage sessions directly:
- `/memory --retire <slug>` -- manually retire a specific session
- `/memory --gc` -- garbage collect retired sessions past the 30-day grace period
- `/memory --restore <slug>` -- restore a retired session to active status

Bash safety rules:
- Do not combine a heredoc (`<<`), Python interpreter, and `.claude` path in a single Bash command.
- All staging file content must be written via the Write tool (not Bash). Bash is only for running python3 scripts.
- Do NOT use `python3 -c` for any file operations (read, write, delete, glob). Use dedicated scripts instead.
- Do NOT use `find -delete` or `rm` with `.claude` paths (use Python glob+os.remove instead).
- Do NOT pass inline JSON containing `.claude` paths on the Bash command line (use `--result-file` with a staging temp file instead).

Configuration -- `.claude/memory/memory-config.json` (all defaults apply if absent):
- `architecture.simplified_flow` -- enable simplified 3-phase flow (default: true). When false, falls back to the SKILL.md.v5 5-phase flow.
- `categories.<name>.enabled` -- enable/disable category (default: true)
- `categories.<name>.description` -- plain-text category description for LLM classification context (default: see memory-config.default.json)
- `categories.<name>.auto_capture` -- enable/disable auto-capture (default: true)
- `categories.<name>.retention_days` -- auto-expire after N days (0 = permanent; 90 for sessions)
- `categories.session_summary.max_retained` -- max session summaries to keep (default: 5)
- `retrieval.max_inject` -- max memories injected per prompt (default: 3)
- `max_memories_per_category` -- max files per folder (default: 100)
- `triage.parallel.enabled` -- enable parallel subagent drafting (default: true)
- `triage.parallel.category_models` -- per-category model for drafting (see default config for per-category defaults; fallback: haiku)
- `triage.parallel.verification_enabled` -- enable/disable Phase 1.5 content verification (default: false)
- `triage.parallel.verification_model` -- model for verification phase (default: sonnet)
- `triage.parallel.default_model` -- fallback model if category not in map (default: haiku)
- `delete.grace_period_days` -- days before retired records are purged (default: 30)
- `delete.archive_retired` -- whether to archive instead of purge (default: true; agent-interpreted, not script-enforced)
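A minimal memory-config.json overriding a few of these defaults might look like this (key nesting follows the dotted paths listed above; the exact placement of keys without a dotted prefix is an assumption):

```json
{
  "categories": {
    "tech_debt": { "enabled": false },
    "session_summary": { "max_retained": 3 }
  },
  "retrieval": { "max_inject": 2 },
  "triage": {
    "parallel": {
      "verification_enabled": true,
      "category_models": { "decision": "sonnet" }
    }
  }
}
```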