Chat with local Ollama models that can explore your codebase using tools. Use this skill when you need AI assistance from local models for code analysis, git operations, or file exploration.
```
/plugin marketplace add IsmaelMartinez/local-brain
/plugin install local-brain@local-brain-marketplace
```

This skill inherits all available tools. When active, it can use any tool Claude has access to.
Install local-brain:

```
uv pip install local-brain
```

Or with pipx:

```
pipx install local-brain
```
Requirements:

- Ollama, with at least one tool-capable model pulled (e.g. `ollama pull qwen3`)

Usage:

```
local-brain "prompt"                          # Ask anything (auto-selects best model)
local-brain -v "prompt"                       # Show tool calls
local-brain -m qwen2.5:3b "prompt"            # Specific model
local-brain --trace "prompt"                  # Enable OTEL tracing
local-brain --list-models                     # Show available models
local-brain --root /path/to/project "prompt"  # Set project root
local-brain doctor                            # Check system health
```
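For scripting, the flags above can be composed into a subprocess invocation. A minimal sketch (the helper name is hypothetical; only the documented flags are used):

```python
import subprocess

def ask_local_brain(prompt, model=None, root=None, verbose=False):
    """Build a local-brain command line from the documented flags (hypothetical helper)."""
    cmd = ["local-brain"]
    if verbose:
        cmd.append("-v")            # show tool calls
    if model:
        cmd += ["-m", model]        # e.g. "qwen2.5:3b"
    if root:
        cmd += ["--root", root]     # set project root
    cmd.append(prompt)
    # Execute with: subprocess.run(cmd, capture_output=True, text=True)
    return cmd
```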
Verify your setup is working correctly:

```
local-brain doctor
```

This checks:

- Ollama installation
- Ollama server status
- Recommended models
- Tool availability
- Optional features (OTEL tracing)
Example output:

```
Local Brain Health Check
Checking Ollama...
✅ Ollama is installed (ollama version is 0.13.1)
Checking Ollama server...
✅ Ollama server is running (9 models)
Checking recommended models...
✅ Recommended models installed: qwen3:latest
Checking tools...
✅ Tools working (9 tools available)
Checking optional features...
✅ OTEL tracing available (--trace flag)
========================================
✅ All checks passed! Local Brain is ready.
```
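The "Ollama server is running" check can be approximated by querying Ollama's local HTTP API. A minimal sketch, assuming the default port 11434 and the standard `/api/tags` response shape (this is not local-brain's actual implementation):

```python
import json
from urllib.request import urlopen

def count_models(tags_json):
    """Count installed models from an Ollama /api/tags response body."""
    return len(json.loads(tags_json).get("models", []))

def check_server(url="http://localhost:11434/api/tags"):
    """Return (ok, model_count), mirroring the server health check above."""
    try:
        with urlopen(url, timeout=2) as resp:
            return True, count_models(resp.read())
    except OSError:
        return False, 0

# Abbreviated example of an /api/tags response body:
sample = '{"models": [{"name": "qwen3:latest"}, {"name": "qwen2.5:3b"}]}'
print(count_models(sample))  # 2
```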
```
local-brain "What's in this repo?"
local-brain "Review the git changes"
local-brain "Generate a commit message"
local-brain "Explain how src/main.py works"
local-brain "Find all TODO comments"
local-brain "What functions are defined in utils.py?"
local-brain "Search for 'validate' in the auth module"
```
Local Brain automatically detects installed Ollama models and selects the best one for tool-calling tasks:
```
# See what models are available
local-brain --list-models
```
Recommended models (verified tool support):

- `qwen3:latest` - General purpose, default choice (Tier 1)
- `qwen2.5:3b` - Resource-constrained environments (Tier 1)

Avoid these models (broken or unreliable tool calling):

- `qwen2.5-coder:*` - Broken with Smolagents
- `llama3.2:1b` - Hallucinations
- `deepseek-r1:*` - No tool support

If no model is specified, Local Brain auto-selects the best installed model.
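The auto-selection behavior can be sketched as a preference-ordered scan over installed models. The function below is illustrative only, not local-brain's actual logic; the tier lists come from the recommendations above:

```python
PREFERRED = ["qwen3:latest", "qwen2.5:3b"]  # Tier 1, verified tool support
AVOID_PREFIXES = ("qwen2.5-coder", "llama3.2:1b", "deepseek-r1")  # unreliable tool calling

def pick_model(installed):
    """Return the best installed model for tool-calling tasks, or None."""
    # Prefer Tier 1 models in order.
    for name in PREFERRED:
        if name in installed:
            return name
    # Otherwise fall back to any model not on the avoid list.
    for name in installed:
        if not name.startswith(AVOID_PREFIXES):
            return name
    return None

print(pick_model(["deepseek-r1:7b", "qwen2.5:3b"]))  # qwen2.5:3b
```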
Enable OpenTelemetry tracing with the --trace flag:
```
local-brain --trace "What files are here?"
```
This emits OpenTelemetry traces for the run.

Install tracing dependencies:

```
pip install local-brain[tracing]
```
All file operations are restricted to the project root (path jailing):

- Sensitive files (`.env`, `.pem`, SSH keys) are blocked

The model assumes these tools are available and uses them directly:
- `read_file(path)` - Read file contents at a given path. Large files are truncated (200 lines / 20K chars). Has 30s timeout. Restricted to project root.
- `list_directory(path, pattern)` - List files in `path` matching a glob pattern (e.g., `*.py`, `src/**/*.js`). Excludes hidden files and common ignored directories. Returns up to 100 files. Has 30s timeout.
- `file_info(path)` - Get file metadata (size, type, modified time) for a given path. Has 30s timeout.
- `search_code(pattern, file_path, ignore_case)` - AST-aware code search. Unlike simple grep, shows intelligent context around matches (function/class boundaries). Supports Python, JavaScript, TypeScript, Go, Rust, Ruby, Java, C/C++.
- `list_definitions(file_path)` - Extract class/function definitions from a source file. Shows signatures and docstrings without full implementation code. Great for understanding file structure quickly.
- `git_diff(staged, file_path)` - Show code changes. Use `staged=True` for staged changes. Optionally provide a `file_path`. Output is truncated.
- `git_status()` - Check repo status. Output is truncated.
- `git_changed_files(staged, include_untracked)` - List changed files. Use `staged=True` for staged files, `include_untracked=True` to include untracked files. Output is truncated.
- `git_log(count)` - View commit history. `count` specifies number of commits (max 50). Output is truncated.

All tools return human-readable output or error messages on failure.
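The path jailing that protects these tools can be sketched with `pathlib`. This is an illustrative check under stated assumptions (the function name and blocked-suffix set are hypothetical), not local-brain's actual code:

```python
from pathlib import Path

BLOCKED_SUFFIXES = {".env", ".pem"}  # sensitive files are refused

def jailed_path(root, requested):
    """Resolve 'requested' relative to root and refuse escapes or sensitive files."""
    root = Path(root).resolve()
    target = (root / requested).resolve()
    if not target.is_relative_to(root):  # Python 3.9+; catches ../ escapes and symlink tricks
        raise PermissionError(f"{requested!r} escapes the project root")
    # Path(".env").suffix is "" (leading dot means hidden file), so check the name too.
    if target.suffix in BLOCKED_SUFFIXES or target.name == ".env":
        raise PermissionError(f"{requested!r} is a blocked sensitive file")
    return target
```

Resolving both the root and the target before comparing is what makes `../`-style traversal fail even when the path components look innocent individually.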