Index project documentation, codebases, and knowledge graphs for hybrid retrieval: BM25 keywords, semantic similarity, GraphRAG relationships, or fused multi-mode search. Retrieve cited chunks with scores to research dependencies, errors, and concepts in seconds using Ollama, OpenAI, or Anthropic.
npx claudepluginhub spillwavesolutions/agent-brain --plugin agent-brain
Search using BM25 keyword matching for exact terms
View embedding cache metrics or clear the cache
12-step wizard to configure all Agent Brain settings — providers, storage, GraphRAG, reranking, caching, file watcher, chunking, and server deployment
Configure the embedding provider for vector search
Manage indexed folders — list, add, or remove
Search using GraphRAG for relationship and dependency queries
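Relationship queries such as "what calls this function" are graph traversals at heart. A minimal sketch of the idea, using an illustrative call graph rather than Agent Brain's actual internal representation:

```python
from collections import deque

# Illustrative call graph: caller -> callees (not Agent Brain's data model).
call_graph = {
    "main": ["load_config", "run_server"],
    "run_server": ["handle_request"],
    "handle_request": ["parse_query", "search"],
    "search": ["rank_results"],
}

def callers_of(target: str) -> set[str]:
    """Walk the reversed call graph to collect every transitive caller."""
    reverse: dict[str, list[str]] = {}
    for caller, callees in call_graph.items():
        for callee in callees:
            reverse.setdefault(callee, []).append(caller)
    seen: set[str] = set()
    queue = deque(reverse.get(target, []))
    while queue:
        caller = queue.popleft()
        if caller not in seen:
            seen.add(caller)
            queue.extend(reverse.get(caller, []))
    return seen

print(sorted(callers_of("search")))  # direct and indirect callers
```

A GraphRAG index answers the same kind of question over entities extracted from the whole codebase, so results come back as cited chunks rather than bare names.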
Show available Agent Brain commands and usage
Search using hybrid BM25 + semantic with alpha tuning
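Alpha tuning blends the two score sources: higher alpha favors semantic similarity, lower alpha favors exact keyword matches. A sketch of the standard weighted-sum formulation; the plugin's exact fusion formula may differ:

```python
def hybrid_score(bm25: float, semantic: float, alpha: float = 0.5) -> float:
    """Blend normalized BM25 and semantic scores for one chunk.

    alpha=1.0 is pure semantic search, alpha=0.0 is pure keyword search.
    Both inputs are assumed to be normalized to [0, 1].
    """
    return alpha * semantic + (1.0 - alpha) * bm25

# A query full of exact identifiers might use a low alpha so the
# strong keyword match (~0.8 combined here) dominates the ranking:
print(hybrid_score(bm25=0.9, semantic=0.4, alpha=0.2))
```

In practice, low alpha suits error strings and symbol names; high alpha suits conceptual questions where wording varies.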
Index documents for semantic search
Initialize Agent Brain for the current project
Inject custom metadata into chunks during indexing via Python scripts or JSON metadata files
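The exact hook interface is not documented here, but a metadata script of this kind generally receives each chunk and returns it with extra fields attached. A hypothetical sketch, where the `path` and `metadata` keys are assumptions:

```python
import json

def enrich_chunk(chunk: dict) -> dict:
    """Attach custom metadata to a chunk before it is indexed.

    Hypothetical hook shape: chunk dict in, enriched chunk dict out.
    """
    meta = chunk.setdefault("metadata", {})
    meta["team"] = "platform"                        # static tag (assumed field)
    meta["is_test"] = "tests/" in chunk.get("path", "")  # derived from the path
    return chunk

chunk = {"path": "tests/test_search.py", "text": "def test_bm25(): ..."}
print(json.dumps(enrich_chunk(chunk)["metadata"]))
```

The same tags could equally come from a static JSON metadata file; a script is useful when the values must be computed per chunk.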
Install Agent Brain plugin for a specific runtime (Claude, OpenCode, Gemini)
Install Agent Brain packages using pipx, uv, pip, or conda
Monitor and manage async indexing jobs in the queue
Search using BM25 keyword matching for exact terms
List all running Agent Brain instances across projects
Search using multi-mode fusion combining all search modes
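Combining rankings from several search modes is commonly done with reciprocal rank fusion. An illustrative sketch of that technique; Agent Brain's actual multi-mode fusion strategy is not specified here:

```python
def reciprocal_rank_fusion(result_lists: list[list[str]], k: int = 60) -> list[str]:
    """Fuse ranked result lists into one ranking.

    Each document scores 1/(k + rank) per list it appears in, so items
    ranked well across modes rise to the top. k=60 is the conventional default.
    """
    scores: dict[str, float] = {}
    for results in result_lists:
        for rank, doc_id in enumerate(results, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

bm25_hits = ["a", "b", "c"]
semantic_hits = ["b", "c", "a"]
graph_hits = ["b", "a"]
# "b" wins: it places high in all three mode rankings.
print(reciprocal_rank_fusion([bm25_hits, semantic_hits, graph_hits]))
```

Rank-based fusion sidesteps the problem that BM25, cosine similarity, and graph scores live on incompatible scales.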
List and configure embedding and summarization providers
Clear the document index (requires confirmation)
Search indexed documentation using hybrid BM25+semantic retrieval
Search using semantic vector similarity for conceptual queries
Complete guided setup for Agent Brain (install, config, init, verify)
Start the Agent Brain server for this project
Show Agent Brain server status (health, documents, cache, watcher)
Stop the Agent Brain server for this project
Configure the summarization provider for code summaries
List available file type presets for indexing
Search using semantic vector similarity for concepts
Verify Agent Brain installation and configuration
Show current version and manage Agent Brain versions
Intelligent research agent that uses Agent Brain for knowledge retrieval with adaptive search modes
Proactively assists with document and code search using Agent Brain
Proactively assists with Agent Brain installation and configuration
Installation and configuration skill for Agent Brain document search system. Use when asked to "install agent brain", "setup agent brain", "configure agent brain", "setting up document search", "installing agent-brain packages", "configuring API keys", "initializing project for search", "troubleshooting agent brain", "pip install agent-brain", "agent brain not working", "agent brain setup error", "configure embeddings provider", "setup ollama for agent brain", or "agent brain environment variables". Covers package installation, provider configuration, project initialization, and server management.
Expert Agent Brain skill for document search with BM25 keyword, semantic vector, hybrid, graph, and multi retrieval modes. Use when asked to "search documentation", "query domain", "find in docs", "bm25 search", "hybrid search", "semantic search", "graph search", "multi search", "find dependencies", "code relationships", "searching knowledge base", "querying indexed documents", "finding code references", "exploring codebase", "what calls this function", "find imports", "trace dependencies", "brain search", "brain query", "knowledge base search", "cache management", "clear embedding cache", "cache hit rate", or "cache status". Supports multi-instance architecture with automatic server discovery. GraphRAG mode enables relationship-aware queries for code dependencies and entity connections. Pluggable providers for embeddings (OpenAI, Cohere, Ollama) and summarization (Anthropic, OpenAI, Gemini, Grok, Ollama). Supports multiple runtimes (Claude Code, OpenCode, Gemini CLI) with shared .agent-brain/ data directory.
Upstash Context7 MCP server for up-to-date documentation lookup. Pull version-specific documentation and code examples directly from source repositories into your LLM context.
AI-powered wiki generator for code repositories. Generates comprehensive, Mermaid-rich documentation with dark-mode VitePress sites, onboarding guides, deep research, and source citations. Inspired by OpenDeepWiki and deepwiki-open.
Claude + Obsidian knowledge companion. Sets up a persistent, compounding wiki vault. Covers memory management, session notetaking, knowledge organization, and agent context across projects. Based on Andrej Karpathy's LLM Wiki pattern. Optional DragonScale Memory extension adds hierarchical log folds, deterministic page addresses, embedding-based semantic tiling lint, and boundary-first autoresearch topic selection.
Complete developer workflow toolkit. Includes 34 reference skills, 34 specialized agents, and 21 slash commands covering TDD, debugging, code review, architecture, documentation, refactoring, security, testing, git workflows, API design, performance, UI/UX design, plugin development, and incident response. Full SDLC coverage with MCP integrations.
Comprehensive C4 architecture documentation workflow with bottom-up code analysis, component synthesis, container mapping, and context diagram generation