Vector Memory Usage

Install

Install the plugin:

```
npx claudepluginhub aeriondyseti/cc-plugins --plugin vector-memory
```

Description

This skill should be used when the user asks to "store a memory", "remember this", "search memories", "what did we decide", "find relevant context", "update a memory", or "delete a memory"; says "that memory was useful"; discusses "memory quality", "memory best practices", or "proactive memory search"; or when guidance is needed on when and how to use the vector memory system effectively. It provides patterns for storing, searching, and leveraging semantic memories across sessions.

Tool Access

This skill uses the workspace's default tool permissions.

Skill Content

Vector Memory Usage

The vector memory system provides semantic, project-scoped memory storage. Memories persist across sessions and are retrieved via semantic search — meaning queries find relevant memories by meaning, not just keyword matching.

Database Storage and Version Control

The vector-memory MCP server stores its database as a local file (.vector-memory/memory.lance) inside the project directory. This database should be committed to version control by default. Committing the database ensures:

  • Portability — cloning the repo includes all accumulated project context, so new sessions (or new machines) start with full memory intact
  • Collaboration — teammates benefit from shared architectural decisions, known blockers, and implementation insights
  • Durability — the database is backed up alongside the code it describes, preventing accidental loss

The database is a compact binary format (LanceDB) that diffs and merges reasonably well in Git. If a project has sensitive memories that should not be committed, add .vector-memory/ to .gitignore on a per-project basis — but the default expectation is to commit it.
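For that per-project opt-out, the entry is a single line in .gitignore (the comment is illustrative):

```
# Opt out of committing memories for this project only — the default is to commit
.vector-memory/
```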

When to Proactively Search Memories

Search memories BEFORE making decisions or assumptions. The cost of an unnecessary search is low; the cost of missing relevant context is high.

Mandatory Search Triggers

  • Architectural decisions — before choosing a library, pattern, or approach, search for prior decisions on the same topic
  • Debugging unfamiliar code — search for implementation notes, known issues, or past resolutions
  • Starting a new task — search for relevant context, prior attempts, or related decisions
  • Referential ambiguity — when the user says "the project", "that bug", "last time", "as we discussed", search to resolve the reference
  • Repeated patterns — when implementing something similar to past work, search for the established pattern

Recommended Search Triggers

  • Before suggesting solutions — check if the problem was solved before
  • When encountering unfamiliar conventions — search for project-specific patterns or standards
  • Code review context — search for why code was written a certain way before suggesting changes
  • Configuration questions — search for prior setup decisions and rationale

Writing Effective Search Queries

Use Natural Language with Keywords

Good queries combine intent with specific terms:

| Scenario | Query | Intent |
|---|---|---|
| Resuming work | "authentication system architecture" | continuity |
| Checking a decision | "database choice PostgreSQL vs SQLite" | fact_check |
| Finding patterns | "error handling patterns API endpoints" | frequent |
| Exploring connections | "performance optimization caching" | associative |
| Creative exploration | "alternative approaches to state management" | explore |

Search Intents

Call mcp__vector-memory__search_memories with the appropriate intent:

  • continuity — resuming work, finding recent context (favors recency)
  • fact_check — verifying decisions or specifications (favors relevance)
  • frequent — finding common patterns or preferences (favors utility)
  • associative — brainstorming, finding connections (high relevance + variety)
  • explore — stuck or creative mode (balanced + diverse results)

Always Provide a Reason

The reason_for_search field forces intentional retrieval. Be specific:

  • "Checking if there's a prior decision on auth approach before suggesting JWT"
  • "Looking for known issues with the payment module before debugging"
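Combining a keyword-rich query, an intent, and a specific reason, a search call might look like the following sketch (intent and reason_for_search appear in this skill's description; the query field name is an assumption about the tool's input shape):

```json
{
  "query": "database choice PostgreSQL vs SQLite",
  "intent": "fact_check",
  "reason_for_search": "Checking if there's a prior decision on the database before suggesting a migration tool"
}
```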

Storing High-Quality Memories

One Concept Per Memory

Each memory should be self-contained and capture exactly one idea:

Good:

"Chose libSQL over PostgreSQL for the Resonance project because
of native vector support and simpler single-file deployment for local-first
architecture."

Bad:

"Uses SQLite"

The good example includes: what was decided, for which project, and why. The bad example lacks context, subject, and reasoning.

Memory Content Rules

  • 1-3 sentences (20-75 words) per memory
  • Self-contained — use explicit subjects, never "it", "this", "the project"
  • Include dates/versions when relevant
  • Be concrete — specific file paths, tool names, version numbers

Using embedding_text for Long Content

When memory content exceeds ~1,000 characters, provide an embedding_text field with a concise searchable summary. The embedding is generated from embedding_text instead of the full content, ensuring the memory remains discoverable:

```json
{
  "content": "[detailed multi-paragraph implementation notes...]",
  "embedding_text": "Authentication middleware implementation using JWT with RS256 signing and refresh token rotation",
  "metadata": { "type": "implementation" }
}
```

What to Store

Call mcp__vector-memory__store_memories with appropriate metadata type tags:

| Type | Store | Example |
|---|---|---|
| decision | What was chosen + why | "Chose Drizzle ORM over Prisma for type safety and SQL-like syntax" |
| implementation | What was built + where + patterns | "Auth middleware in src/middleware/auth.ts uses JWT with RS256 signing" |
| insight | Learning + why it matters | "LanceDB requires schema migration when adding vector columns" |
| blocker | Problem + resolution | "CORS errors resolved by adding origin whitelist in server config" |
| next-step | TODO + suggested approach | "Add rate limiting to API; consider express-rate-limit middleware" |
| context | Background info + constraints | "Project targets Node 20+ only; can use native fetch and crypto" |
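A store_memories call following these rules might look like the sketch below (the batch wrapper and field names are assumptions, based on the batch support noted in the tools reference and the embedding_text example above):

```json
{
  "memories": [
    {
      "content": "Chose Drizzle ORM over Prisma for the API layer because of type safety and SQL-like syntax.",
      "metadata": { "type": "decision" }
    },
    {
      "content": "Auth middleware in src/middleware/auth.ts uses JWT with RS256 signing and refresh token rotation.",
      "metadata": { "type": "implementation" }
    }
  ]
}
```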

What NOT to Store

  • Machine-specific paths or local environment details
  • Ephemeral states ("tests are currently failing")
  • Information easily discoverable from code
  • Pleasantries or conversational filler
  • Duplicate information already in existing memories

Updating and Deleting Memories

When to Update

Call mcp__vector-memory__update_memories when a memory's content is still conceptually valid but needs correction or refinement:

  • A decision's rationale needs clarification
  • An implementation detail changed (new file path, different approach)
  • A version number or date needs updating
  • The embedding_text should be improved for better search discoverability

Updating preserves the memory ID, so any checkpoint references to it remain valid.
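A sketch of an update that corrects a file path while preserving the memory ID (the placeholder ID and the exact payload shape are illustrative assumptions):

```json
{
  "updates": [
    {
      "id": "<memory-id>",
      "content": "Auth middleware moved to src/server/auth.ts; still uses JWT with RS256 signing.",
      "embedding_text": "Authentication middleware location and JWT RS256 signing"
    }
  ]
}
```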

When to Delete

Call mcp__vector-memory__delete_memories when a memory is no longer relevant:

  • A decision was reversed entirely
  • A feature was removed from the codebase
  • Information is outdated and misleading
  • A duplicate was accidentally created

Deletion is a soft-delete — the memory can be recovered by searching with include_deleted: true. This means it is safe to delete aggressively when memories become stale.

Rule of thumb: If the memory needs minor corrections, update it. If it no longer reflects reality, delete it.
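Because deletes are soft, recovery is just a search with the include_deleted flag set (include_deleted comes from the text above; the other field names are assumptions):

```json
{
  "query": "CORS origin whitelist blocker",
  "intent": "fact_check",
  "include_deleted": true,
  "reason_for_search": "Recovering a soft-deleted memory about the CORS resolution"
}
```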

Memory Usefulness Feedback

Call mcp__vector-memory__report_memory_usefulness after retrieving memories to indicate whether they were helpful. This feedback loop is important for search quality:

  • Report useful when a memory directly informed a decision, resolved ambiguity, or saved time
  • Report not useful when a memory was irrelevant to the query, outdated, or misleading
  • Reporting consistently helps the system learn which memory patterns provide value
  • Skipping reports means the system cannot improve its ranking over time
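A feedback report might look like this sketch (the field names and batch shape are assumptions; only the tool name comes from the tools reference):

```json
{
  "reports": [
    { "memory_id": "<memory-id>", "useful": true },
    { "memory_id": "<other-memory-id>", "useful": false }
  ]
}
```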

Tools Reference

All tools use the mcp__vector-memory__ prefix:

| Tool | Purpose |
|---|---|
| mcp__vector-memory__search_memories | Semantic search with intent-based ranking |
| mcp__vector-memory__store_memories | Persist new memories (batch supported) |
| mcp__vector-memory__get_memories | Retrieve specific memories by ID |
| mcp__vector-memory__update_memories | Modify existing memories in place |
| mcp__vector-memory__delete_memories | Soft-delete outdated memories (recoverable) |
| mcp__vector-memory__report_memory_usefulness | Feedback on memory quality |

For session-level snapshots, see the checkpoint-workflow skill which covers mcp__vector-memory__store_checkpoint and mcp__vector-memory__get_checkpoint.

Stats

  • Stars: 1
  • Forks: 0
  • Last Commit: Feb 22, 2026