Start and verify the local Ollama LLM server. Use when Ollama is needed for RLM distillation, seal snapshots, embeddings, or any local LLM inference — and it's not already running. Checks if Ollama is running, starts it if not, and verifies the health endpoint.
From `rlm-factory`:

```shell
npx claudepluginhub richfrem/agent-plugins-skills --plugin rlm-factory
```
This skill requires Python 3.8+ and uses only the standard library; no external packages are needed.
To install this skill's dependencies:

```shell
pip-compile ./requirements.in
pip install -r ./requirements.txt
```
See ./requirements.txt for the dependency lockfile (currently empty — standard library only).
Ollama provides local LLM inference for RLM distillation (seal phase summarization) and embeddings.
Trigger symptoms include `Connection refused to 127.0.0.1:11434` and `[DISTILLATION FAILED] for new files`.

```shell
# Check if Ollama is already running
curl -sf http://127.0.0.1:11434/api/tags > /dev/null && echo "✅ Ollama running" || echo "❌ Ollama not running"
```
If running, you're done. If not, proceed.
```shell
# Start Ollama in the background (portable redirection instead of bash-only &>)
ollama serve > /dev/null 2>&1 &

# Wait 2-3 seconds, then verify
sleep 3
curl -sf http://127.0.0.1:11434/api/tags > /dev/null && echo "✅ Ollama ready" || echo "❌ Ollama failed to start"
```
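A fixed `sleep 3` can be too short on slower machines or when a model is still loading. The check-and-wait steps above can be combined into a retry loop; this is a minimal sketch, with the function name and one-second poll interval chosen for illustration:

```shell
# Poll Ollama's health endpoint instead of sleeping a fixed interval.
# Usage: wait_for_ollama [base_url] [max_attempts]
wait_for_ollama() {
  url="${1:-http://127.0.0.1:11434}"
  max="${2:-10}"
  i=1
  while [ "$i" -le "$max" ]; do
    if curl -sf "$url/api/tags" > /dev/null 2>&1; then
      echo "✅ Ollama ready after $i attempt(s)"
      return 0
    fi
    sleep 1
    i=$((i + 1))
  done
  echo "❌ Ollama not reachable after $max attempt(s)" >&2
  return 1
}
```

Call `wait_for_ollama` right after backgrounding `ollama serve`; it returns non-zero if the server never comes up, so it can gate the rest of a script.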
For RLM distillation, the project uses the model you define in `.env`.
```shell
# List available models
ollama list

# If the model is missing, pull it
ollama pull qwen2:7b
```
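The list-then-pull steps can be scripted so the model is fetched only when missing. A sketch under assumptions: the `.env` key name (e.g. `RLM_MODEL`) is illustrative, and matching the model as a line prefix of `ollama list` output is a simplification:

```shell
# True if the given model name starts a line of `ollama list`-style output (stdin).
model_present() {
  grep -q "^$1"
}

# Pull the model only if `ollama list` doesn't already show it.
ensure_model() {
  model="${1:-qwen2:7b}"
  if ollama list 2>/dev/null | model_present "$model"; then
    echo "model $model already pulled"
  else
    ollama pull "$model"
  fi
}
```

Usage, assuming an `RLM_MODEL=` line in `.env`: `ensure_model "$(grep '^RLM_MODEL=' .env | cut -d= -f2)"`.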
| Symptom | Fix |
|---|---|
| Connection refused after start | Wait longer (`sleep 5`); the model may still be loading |
| `ollama: command not found` | Ollama is not installed; ask the user to install it from https://ollama.com |
| Port 11434 already in use | Another process holds the port; run `lsof -i :11434` to identify it |
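For the port-in-use case, it is worth checking whether the listener is actually a healthy Ollama instance before investigating further, since a stale or second Ollama answering on 11434 needs no fix at all. A sketch (the function name is illustrative):

```shell
# Returns 0 if whatever listens on the port answers Ollama's /api/tags endpoint.
port_serves_ollama() {
  curl -sf "http://127.0.0.1:${1:-11434}/api/tags" > /dev/null 2>&1
}

if port_serves_ollama 11434; then
  echo "port 11434 is a healthy Ollama; nothing to do"
else
  echo "port busy but not answering as Ollama; inspect with: lsof -i :11434"
fi
```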
`/rlm-factory:distill` requires Ollama for batch summarization.