Implement semantic vector search with AgentDB for intelligent document retrieval, similarity matching, and context-aware querying. Use when building RAG systems, semantic search engines, or intelligent knowledge bases.
/plugin marketplace add DNYoussef/context-cascade
/plugin install dnyoussef-context-cascade@DNYoussef/context-cascade
This skill inherits all available tools. When active, it can use any tool Claude has access to.
examples/example-1-rag-basic.md
examples/example-2-hybrid-search.md
examples/example-3-reranking.md
graphviz/workflow.dot
readme.md
references/embedding-models.md
references/rag-patterns.md
resources/scripts/rag_pipeline.sh
resources/scripts/semantic_search.py
resources/scripts/similarity_match.py
resources/templates/embedding-model.json
resources/templates/rag-config.yaml
resources/templates/vector-index.yaml
tests/test-1-semantic-search.md
tests/test-2-rag-system.md
tests/test-3-knowledge-base.md
Implements vector-based semantic search using AgentDB's high-performance vector database, with 150x-12,500x faster operations than traditional solutions. Features HNSW indexing, quantization, and sub-millisecond search (<100µs).
# Initialize with default dimensions (1536 for OpenAI ada-002)
npx agentdb@latest init ./vectors.db
# Custom dimensions for different embedding models
npx agentdb@latest init ./vectors.db --dimension 768 # sentence-transformers (e.g. all-mpnet-base-v2)
npx agentdb@latest init ./vectors.db --dimension 384 # all-MiniLM-L6-v2
# Use preset configurations
npx agentdb@latest init ./vectors.db --preset small # <10K vectors
npx agentdb@latest init ./vectors.db --preset medium # 10K-100K vectors
npx agentdb@latest init ./vectors.db --preset large # >100K vectors
# In-memory database for testing
npx agentdb@latest init ./vectors.db --in-memory
# Basic similarity search
npx agentdb@latest query ./vectors.db "[0.1,0.2,0.3,...]"
# Top-k results
npx agentdb@latest query ./vectors.db "[0.1,0.2,0.3]" -k 10
# With similarity threshold (cosine similarity)
npx agentdb@latest query ./vectors.db "0.1 0.2 0.3" -t 0.75 -m cosine
# Different distance metrics
npx agentdb@latest query ./vectors.db "[...]" -m euclidean # L2 distance
npx agentdb@latest query ./vectors.db "[...]" -m dot # Dot product
# JSON output for automation
npx agentdb@latest query ./vectors.db "[...]" -f json -k 5
# Verbose output with distances
npx agentdb@latest query ./vectors.db "[...]" -v
# Export vectors to JSON
npx agentdb@latest export ./vectors.db ./backup.json
# Import vectors from JSON
npx agentdb@latest import ./backup.json
# Get database statistics
npx agentdb@latest stats ./vectors.db
import { createAgentDBAdapter, computeEmbedding } from 'agentic-flow/reasoningbank';
// Initialize with vector search optimizations
const adapter = await createAgentDBAdapter({
dbPath: '.agentdb/vectors.db',
enableLearning: false, // Vector search only
enableReasoning: true, // Enable semantic matching
quantizationType: 'binary', // 32x memory reduction
cacheSize: 1000, // Fast retrieval
});
// Store document with embedding
const text = "The quantum computer achieved 100 qubits";
const embedding = await computeEmbedding(text);
await adapter.insertPattern({
id: '',
type: 'document',
domain: 'technology',
pattern_data: JSON.stringify({
embedding,
text,
metadata: { category: "quantum", date: "2025-01-15" }
}),
confidence: 1.0,
usage_count: 0,
success_count: 0,
created_at: Date.now(),
last_used: Date.now(),
});
// Semantic search with MMR (Maximal Marginal Relevance)
const queryEmbedding = await computeEmbedding("quantum computing advances");
const results = await adapter.retrieveWithReasoning(queryEmbedding, {
domain: 'technology',
k: 10,
useMMR: true, // Diverse results
synthesizeContext: true, // Rich context
});
// Store with automatic embedding
await db.storeWithEmbedding({
content: "Your document text",
metadata: { source: "docs", page: 42 }
});
// Find similar documents
const similar = await db.findSimilar("quantum computing", {
limit: 5,
minScore: 0.75
});
// Combine vector similarity with metadata filtering
const results = await db.hybridSearch({
query: "machine learning models",
filters: {
category: "research",
date: { $gte: "2024-01-01" }
},
limit: 20
});
// Build RAG pipeline
async function ragQuery(question: string) {
// 1. Get relevant context
const context = await db.searchSimilar(
await embed(question),
{ limit: 5, threshold: 0.7 }
);
// 2. Generate answer with context
const prompt = `Context: ${context.map(c => c.text).join('\n')}
Question: ${question}`;
return await llm.generate(prompt);
}
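One way to ground the same pipeline in the adapter API shown earlier in this skill is sketched below. It reuses the adapter created with createAgentDBAdapter above; the per-hit result shape (a pattern_data field mirroring insertPattern) and the callLLM helper are assumptions for illustration, not part of AgentDB.
// Minimal RAG sketch, assuming `adapter` is the instance created above.
import { computeEmbedding } from 'agentic-flow/reasoningbank';
declare function callLLM(prompt: string): Promise<string>; // hypothetical LLM client
async function answerWithContext(question: string): Promise<string> {
  // 1. Embed the question and retrieve diverse, relevant documents
  const queryEmbedding = await computeEmbedding(question);
  const hits = await adapter.retrieveWithReasoning(queryEmbedding, {
    domain: 'technology',
    k: 5,
    useMMR: true,
    synthesizeContext: true,
  });
  // 2. Unpack stored text (pattern_data was JSON.stringify-ed at insert time;
  //    the hit shape is an assumption based on the insertPattern record above)
  const context = hits
    .map((hit: any) => JSON.parse(hit.pattern_data).text)
    .join('\n');
  // 3. Hand the assembled context to the model
  return callLLM(`Context:\n${context}\n\nQuestion: ${question}`);
}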
// Efficient batch storage
await db.batchStore(documents.map(doc => ({
text: doc.content,
embedding: doc.vector,
metadata: doc.meta
})));
# Start AgentDB MCP server for Claude Code
npx agentdb@latest mcp
# Add to Claude Code (one-time setup)
claude mcp add agentdb npx agentdb@latest mcp
# Now use MCP tools in Claude Code:
# - agentdb_query: Semantic vector search
# - agentdb_store: Store documents with embeddings
# - agentdb_stats: Database statistics
# Run comprehensive benchmarks
npx agentdb@latest benchmark
# Results:
# ✅ Pattern Search: 150x faster (100µs vs 15ms)
# ✅ Batch Insert: 500x faster (2ms vs 1s for 100 vectors)
# ✅ Large-scale Query: 12,500x faster (8ms vs 100s at 1M vectors)
# ✅ Memory Efficiency: 4-32x reduction with quantization
AgentDB provides multiple quantization strategies for memory efficiency:
const adapter = await createAgentDBAdapter({
quantizationType: 'binary', // 768-dim → 96 bytes
});
const adapter = await createAgentDBAdapter({
quantizationType: 'scalar', // 768-dim → 768 bytes
});
const adapter = await createAgentDBAdapter({
quantizationType: 'product', // 768-dim → 48-96 bytes
});
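The memory figures in the comments above follow directly from the vector width; a quick sketch of the arithmetic, assuming float32 source vectors:
// Back-of-the-envelope memory math for a 768-dimensional embedding (illustration only)
const dims = 768;
const float32Bytes = dims * 4;  // 3072 bytes per raw vector
const binaryBytes = dims / 8;   // 96 bytes  (1 bit per dimension)
const scalarBytes = dims * 1;   // 768 bytes (1 byte per dimension)
console.log(float32Bytes / binaryBytes); // 32x reduction
console.log(float32Bytes / scalarBytes); // 4x reduction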
# Cosine similarity (default, best for most use cases)
npx agentdb@latest query ./db.sqlite "[...]" -m cosine
# Euclidean distance (L2 norm)
npx agentdb@latest query ./db.sqlite "[...]" -m euclidean
# Dot product (for normalized vectors)
npx agentdb@latest query ./db.sqlite "[...]" -m dot
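For intuition, the metrics themselves are simple to compute by hand; a small standalone sketch, independent of AgentDB:
// Toy comparison of the three metrics on short vectors
const dot = (a: number[], b: number[]) => a.reduce((sum, x, i) => sum + x * b[i], 0);
const norm = (a: number[]) => Math.sqrt(dot(a, a));
const cosine = (a: number[], b: number[]) => dot(a, b) / (norm(a) * norm(b));
const euclidean = (a: number[], b: number[]) =>
  Math.sqrt(a.reduce((sum, x, i) => sum + (x - b[i]) ** 2, 0));
const a = [0.1, 0.2, 0.3];
const b = [0.2, 0.1, 0.3];
console.log(cosine(a, b));    // similarity: higher is closer (1 = identical direction)
console.log(euclidean(a, b)); // distance: lower is closer
console.log(dot(a, b));       // equals cosine similarity when vectors are L2-normalized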
# Check if HNSW indexing is enabled (automatic)
npx agentdb@latest stats ./vectors.db
# Expected: <100µs search time
# Enable binary quantization (32x reduction)
# Use in adapter: quantizationType: 'binary'
# Adjust similarity threshold
npx agentdb@latest query ./db.sqlite "[...]" -t 0.8 # Higher threshold
# Or use MMR for diverse results
# Use in adapter: useMMR: true
# Check embedding model dimensions:
# - OpenAI ada-002: 1536
# - sentence-transformers (e.g. all-mpnet-base-v2): 768
# - all-MiniLM-L6-v2: 384
npx agentdb@latest init ./db.sqlite --dimension 768
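A dimension mismatch is easiest to catch before inserting; a minimal guard, assuming computeEmbedding returns a plain number array and that you track the dimension used at init time yourself:
// Sanity check: embedding length must match the --dimension used at init
import { computeEmbedding } from 'agentic-flow/reasoningbank';
const EXPECTED_DIM = 768; // must match the value passed to `init --dimension`
const embedding = await computeEmbedding('some document text');
if (embedding.length !== EXPECTED_DIM) {
  throw new Error(
    `Embedding has ${embedding.length} dimensions, expected ${EXPECTED_DIM}; ` +
    're-initialize the database or switch embedding models.'
  );
}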
# Get comprehensive stats
npx agentdb@latest stats ./vectors.db
# Shows:
# - Total patterns/vectors
# - Database size
# - Average confidence
# - Domains distribution
# - Index status
npx agentdb@latest mcp              # MCP server for Claude Code
npx agentdb@latest --help
npx agentdb@latest help <command>