Mem0 memory layer integration for AI agents. Implements persistent, semantic memory storage and retrieval for long-term context retention and personalization across conversations.
Integrate Mem0 as a universal memory layer for AI agents. Enable persistent memory storage, semantic search across memories, and personalized context retrieval.
Mem0 provides intelligent memory management for AI applications:
```python
from mem0 import Memory

# Initialize with default local storage
m = Memory()

# Or with custom configuration
config = {
    "vector_store": {
        "provider": "qdrant",
        "config": {
            "host": "localhost",
            "port": 6333,
        }
    },
    "llm": {
        "provider": "openai",
        "config": {
            "model": "gpt-4o-mini",
            "temperature": 0.1,
        }
    }
}
m = Memory.from_config(config)
```
```python
# Add memory from conversation
messages = [
    {"role": "user", "content": "I prefer dark mode for all my applications"},
    {"role": "assistant", "content": "I'll remember that you prefer dark mode."}
]
m.add(messages, user_id="user123")

# Add memory from plain text
m.add("User works at Acme Corp as a software engineer", user_id="user123")

# Add with metadata
m.add(
    "Prefers Python over JavaScript",
    user_id="user123",
    metadata={"category": "preferences", "confidence": 0.9}
)

# Search for relevant memories
results = m.search(
    query="What are the user's preferences?",
    user_id="user123",
    limit=5
)
for memory in results:
    print(f"Memory: {memory['memory']}")
    print(f"Relevance: {memory['score']}")
    print(f"Created: {memory['created_at']}")

# Get all memories for a user
all_memories = m.get_all(user_id="user123")

# Filter by metadata
filtered = m.get_all(
    user_id="user123",
    metadata={"category": "preferences"}
)

# Get memory changes over time
history = m.history(memory_id="mem_abc123")
for entry in history:
    print(f"Version: {entry['version']}")
    print(f"Content: {entry['memory']}")
    print(f"Updated: {entry['updated_at']}")
```
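The retrieval calls above all scope by `user_id` and optionally filter by metadata. A minimal in-memory stand-in (a hypothetical sketch for illustration, not mem0's actual implementation) makes that access pattern concrete:

```python
import itertools

class TinyMemoryStore:
    """Toy stand-in for a memory store: user-scoped add, metadata-filtered get_all."""
    _ids = itertools.count(1)

    def __init__(self):
        self._memories = []  # one dict per stored memory

    def add(self, text, user_id, metadata=None):
        record = {
            "id": f"mem_{next(self._ids)}",
            "memory": text,
            "user_id": user_id,
            "metadata": metadata or {},
        }
        self._memories.append(record)
        return record["id"]

    def get_all(self, user_id, metadata=None):
        # Keep memories for this user whose metadata contains every requested key/value
        return [
            m for m in self._memories
            if m["user_id"] == user_id
            and all(m["metadata"].get(k) == v for k, v in (metadata or {}).items())
        ]

store = TinyMemoryStore()
store.add("Prefers Python over JavaScript", "user123", {"category": "preferences"})
store.add("Works at Acme Corp", "user123", {"category": "facts"})
store.add("Prefers tabs", "user456", {"category": "preferences"})

prefs = store.get_all("user123", metadata={"category": "preferences"})
```

The metadata filter is an AND over the requested keys, which mirrors how the `get_all` filter above is used; a real vector store would add semantic ranking on top of this exact-match scoping.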
```python
from langchain_openai import ChatOpenAI
from mem0 import MemoryClient

# Initialize Mem0 client
mem0_client = MemoryClient(api_key="your-api-key")

# Create LLM with memory-enhanced context
llm = ChatOpenAI(model="gpt-4")

def chat_with_memory(user_message: str, user_id: str) -> str:
    # Retrieve relevant memories
    memories = mem0_client.search(user_message, user_id=user_id, limit=5)
    memory_context = "\n".join([m["memory"] for m in memories])

    # Build prompt with memory context
    system_prompt = f"""You are a helpful assistant.
Here is what you remember about this user:
{memory_context}
Use this context to personalize your response."""

    # Generate response
    response = llm.invoke([
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_message}
    ])

    # Store new memory from conversation
    mem0_client.add(
        [
            {"role": "user", "content": user_message},
            {"role": "assistant", "content": response.content}
        ],
        user_id=user_id
    )

    return response.content
```
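The prompt-assembly step inside `chat_with_memory` is the part worth unit-testing, and it needs no network access if pulled out as a pure function. A sketch, where the memory dicts mirror the search-result shape shown earlier:

```python
def build_system_prompt(memories: list[dict]) -> str:
    """Join retrieved memories into the system prompt used for personalization."""
    memory_context = "\n".join(m["memory"] for m in memories)
    return (
        "You are a helpful assistant.\n"
        "Here is what you remember about this user:\n"
        f"{memory_context}\n"
        "Use this context to personalize your response."
    )

prompt = build_system_prompt([
    {"memory": "Prefers dark mode", "score": 0.92},
    {"memory": "Works at Acme Corp", "score": 0.87},
])
```

Keeping prompt construction separate from the LLM call also makes it easy to cap the context length or drop low-score memories before they reach the model.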
To use the Mem0 MCP server with Claude, configure it in `claude_desktop_config.json`:

```json
{
  "mcpServers": {
    "mem0": {
      "command": "npx",
      "args": ["-y", "@mem0/mcp-server"]
    }
  }
}
```
```javascript
const mem0IntegrationTask = defineTask({
  name: 'mem0-integration-setup',
  description: 'Configure Mem0 memory layer for AI agent',
  inputs: {
    storageBackend: { type: 'string', default: 'local' }, // 'local', 'qdrant', 'postgres', 'cloud'
    vectorDimension: { type: 'number', default: 1536 },
    embeddingModel: { type: 'string', default: 'text-embedding-3-small' },
    memoryCategories: { type: 'array', default: ['facts', 'preferences', 'conversations'] },
    userIsolation: { type: 'boolean', default: true }
  },
  outputs: {
    configured: { type: 'boolean' },
    memoryStats: { type: 'object' },
    artifacts: { type: 'array' }
  },
  async run(inputs, taskCtx) {
    return {
      kind: 'skill',
      title: `Configure Mem0 with ${inputs.storageBackend} backend`,
      skill: {
        name: 'mem0-integration',
        context: {
          storageBackend: inputs.storageBackend,
          vectorDimension: inputs.vectorDimension,
          embeddingModel: inputs.embeddingModel,
          memoryCategories: inputs.memoryCategories,
          userIsolation: inputs.userIsolation,
          instructions: [
            'Validate storage backend availability',
            'Configure embedding model and vector dimensions',
            'Set up memory categories and metadata schemas',
            'Implement user isolation if enabled',
            'Create memory add/search/retrieve functions',
            'Test memory operations with sample data',
            'Document integration patterns for the application'
          ]
        }
      },
      io: {
        inputJsonPath: `tasks/${taskCtx.effectId}/input.json`,
        outputJsonPath: `tasks/${taskCtx.effectId}/result.json`
      }
    };
  }
});
```