From claudeclaw
Adds an Ollama MCP server that exposes local models as tools (list_models, generate), letting the Claude Code container agent offload cheap, fast tasks such as summarization, translation, and simple queries.
npx claudepluginhub sbusso/claudeclaw

This skill uses the workspace's default tool permissions.
This skill adds a stdio-based MCP server that exposes local Ollama models as tools for the container agent. Claude remains the orchestrator but can offload work to local models.
Tools added:
- `ollama_list_models` — lists installed Ollama models
- `ollama_generate` — sends a prompt to a specified model and returns the response

Check if agent/runner/src/ollama-mcp-stdio.ts exists. If it does, skip to Phase 3 (Configure).
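For orientation, here is a minimal sketch of what a stdio MCP server exposing these two tools can look like. It is an assumed shape built on @modelcontextprotocol/sdk and Ollama's /api/tags and /api/generate endpoints, not a copy of the shipped ollama-mcp-stdio.ts:

```ts
// Minimal sketch of a stdio MCP server for Ollama (assumed shape, not the shipped file).
// Assumes Node 18+ (global fetch), @modelcontextprotocol/sdk and zod installed.
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

const OLLAMA_HOST = process.env.OLLAMA_HOST ?? "http://host.docker.internal:11434";

const server = new McpServer({ name: "ollama", version: "0.1.0" });

// ollama_list_models: GET /api/tags returns { models: [{ name, ... }] }
server.tool("ollama_list_models", async () => {
  const res = await fetch(`${OLLAMA_HOST}/api/tags`);
  const { models } = (await res.json()) as { models: { name: string }[] };
  return { content: [{ type: "text" as const, text: models.map((m) => m.name).join("\n") }] };
});

// ollama_generate: POST /api/generate with stream: false for a single JSON reply
server.tool(
  "ollama_generate",
  { model: z.string(), prompt: z.string() },
  async ({ model, prompt }) => {
    const res = await fetch(`${OLLAMA_HOST}/api/generate`, {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ model, prompt, stream: false }),
    });
    const { response } = (await res.json()) as { response: string };
    return { content: [{ type: "text" as const, text: response }] };
  },
);

await server.connect(new StdioServerTransport());
```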
Verify Ollama is installed and running on the host:
ollama list
If Ollama is not installed, direct the user to https://ollama.com/download.
If no models are installed, suggest pulling one:
You need at least one model. I recommend:
ollama pull gemma3:1b        # Small, fast (1GB)
ollama pull llama3.2         # Good general purpose (2GB)
ollama pull qwen3-coder:30b  # Best for code tasks (18GB)
Check that the upstream remote is configured:

git remote -v
If upstream is missing, add it:
git remote add upstream https://github.com/sbusso/claudeclaw.git
git fetch upstream skill/ollama-tool
git merge upstream/skill/ollama-tool
This merges in:
- agent/runner/src/ollama-mcp-stdio.ts (Ollama MCP server)
- scripts/ollama-watch.sh (macOS notification watcher)
- agent/runner/src/index.ts (allowedTools + mcpServers)
- [OLLAMA] log surfacing in src/orchestrator/container-runner.ts
- OLLAMA_HOST in .env.example

If the merge reports conflicts, resolve them by reading the conflicted files and understanding the intent of both sides.
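For reference, the index.ts wiring plausibly takes a shape like the sketch below. The command, args, and env values are assumptions, not quotes from the merged file; only the mcp__<server>__<tool> naming convention for allowed tools is standard Claude Code behaviour:

```ts
// Hypothetical shape of the index.ts changes; paths and values are assumptions.
const mcpServers = {
  ollama: {
    command: "node",
    args: ["ollama-mcp-stdio.js"],  // compiled MCP server entry point
    env: { OLLAMA_HOST: process.env.OLLAMA_HOST ?? "" },
  },
};

// Claude Code exposes MCP tools under the name mcp__<server>__<tool>.
const allowedTools = [
  "mcp__ollama__ollama_list_models",
  "mcp__ollama__ollama_generate",
];
```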
Existing groups have a cached copy of the agent-runner source. Copy the new files:
for dir in data/sessions/*/agent-runner-src; do
cp agent/runner/src/ollama-mcp-stdio.ts "$dir/"
cp agent/runner/src/index.ts "$dir/"
done
Rebuild the orchestrator and the container image:

npm run build
./src/runtimes/docker/build.sh
Build must be clean before proceeding.
By default, the MCP server connects to http://host.docker.internal:11434 (Docker Desktop) with a fallback to localhost. To use a custom Ollama host, add to .env:
OLLAMA_HOST=http://your-ollama-host:11434
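That connection behaviour can be sketched as a small probe loop. This is illustrative only; the helper name resolveOllamaHost and the 1-second timeout are assumptions, and the shipped server may structure this differently:

```ts
// Illustrative only: probe candidate hosts in order, use the first that answers.
async function resolveOllamaHost(): Promise<string> {
  const candidates = [
    process.env.OLLAMA_HOST,              // explicit override from .env
    "http://host.docker.internal:11434",  // Docker Desktop default
    "http://localhost:11434",             // fallback
  ].filter((h): h is string => Boolean(h));

  for (const host of candidates) {
    try {
      const res = await fetch(`${host}/api/tags`, { signal: AbortSignal.timeout(1000) });
      if (res.ok) return host;
    } catch {
      // unreachable; try the next candidate
    }
  }
  throw new Error("No reachable Ollama host; set OLLAMA_HOST in .env");
}
```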
Service name: derived from the directory name: `com.claudeclaw.<dirname>` (macOS) / `claudeclaw-<dirname>` (Linux). For example, if cwd is `my-assistant`, the service is `com.claudeclaw.my-assistant`. Determine the correct service name before running the service commands below.
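In code terms, the rule is simply the following (a sketch of the naming convention above, not quoted from the repo):

```ts
import { basename } from "node:path";

const dirname = basename(process.cwd());  // e.g. "my-assistant"
const service =
  process.platform === "darwin"
    ? `com.claudeclaw.${dirname}`         // macOS launchd label
    : `claudeclaw-${dirname}`;            // Linux systemd user unit
```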
launchctl kickstart -k gui/$(id -u)/com.claudeclaw.<dirname>   # macOS
# Linux: systemctl --user restart claudeclaw-<dirname>
Tell the user:
Send a message like: "use ollama to tell me the capital of France"
The agent should use `ollama_list_models` to find available models, then `ollama_generate` to get a response.
Run the watcher script for macOS notifications when Ollama is used:
./scripts/ollama-watch.sh
Or follow the logs directly:

tail -f logs/claudeclaw.log | grep -i ollama
Look for:
- Agent output: ... Ollama ... — agent used Ollama successfully
- [OLLAMA] >>> Generating — generation started (if log surfacing works)
- [OLLAMA] <<< Done — generation completed
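As an aside on how those [OLLAMA] markers can reach the log at all: with a stdio MCP server, stdout carries the protocol, so progress markers have to go to stderr for the orchestrator to pick up. A hypothetical sketch (logOllama is an illustrative name, not the shipped helper):

```ts
// Hypothetical: progress markers on stderr, since stdout is the MCP channel.
function logOllama(msg: string): void {
  process.stderr.write(`[OLLAMA] ${msg}\n`);
}

logOllama(">>> Generating");  // before calling /api/generate
logOllama("<<< Done");        // after the response arrives
```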
If the agent is trying to run the ollama CLI inside the container instead of using the MCP tools, check that:

- agent/runner/src/index.ts has the ollama entry in mcpServers
- the container image was rebuilt: ./src/runtimes/docker/build.sh
- Ollama is running on the host: ollama list
- the container can reach the host: docker run --rm curlimages/curl curl -s http://host.docker.internal:11434/api/tags
- OLLAMA_HOST in .env points at the right host

The agent may not know about the tools. Try being explicit: "use the ollama_generate tool with gemma3:1b to answer: ..."