Use this skill when planning approaches, comparing implementation options, or validating design decisions by consulting a local LLM via Ollama. Activate when the user says "壁打ちしたい" ("I want to bounce ideas around"), "ローカルLLMに相談" ("consult the local LLM"), "アプローチを比較したい" ("I want to compare approaches"), "方針を確認したい" ("I want to confirm the direction"), or "ローカルで検討" ("consider it locally"), or when brainstorming is needed before committing to an approach. Also activates when offloading lightweight reasoning to reduce Claude API usage.
From ollama-consult. Install with `npx claudepluginhub utakatakyosui/c2lab --plugin ollama-consult`. This skill uses the workspace's default tool permissions.
Consult a local LLM (Ollama) for planning, approach comparison, and decision validation. The local LLM acts as a fast, private sounding board—not a source of ground truth.
Use consult_local_llm when:

- Planning an approach before writing code
- Comparing implementation options and their tradeoffs
- Validating a design decision against a second perspective
- Offloading lightweight reasoning to reduce Claude API usage
Do NOT use for:

- Authoritative facts or ground truth (the local LLM is a sounding board, not an oracle)
- Final decisions presented to the user without your own review and synthesis
Make questions specific and self-contained:

Bad: "Should I use React?"

Good: "I'm building a dashboard that needs real-time updates and has 3 devs familiar with Vue. Should I use React or Vue? Key constraints: 6-week deadline, no SSR needed."
Use the context parameter to share background information:
question: "Which database indexing strategy fits this access pattern?"
context: "Table has 10M rows. Primary queries: lookup by user_id (80%),
range scan by created_at (15%), full-text search (5%).
PostgreSQL 15, < 500ms p99 latency required."
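The question/context split above can be sketched as a small helper. This is a hypothetical illustration (`build_consult_prompt` is not part of the actual tool, which may assemble its prompt differently); it only shows how background context keeps the question itself short:

```python
def build_consult_prompt(question: str, context: str = "") -> str:
    """Combine a question and optional background context into one prompt.

    Hypothetical helper: the real consult_local_llm tool may format its
    input differently; this just illustrates the question/context split.
    """
    parts = []
    if context:
        parts.append(f"Background:\n{context.strip()}")
    parts.append(f"Question: {question.strip()}")
    return "\n\n".join(parts)

prompt = build_consult_prompt(
    question="Which database indexing strategy fits this access pattern?",
    context="Table has 10M rows. Primary queries: lookup by user_id (80%), "
            "range scan by created_at (15%), full-text search (5%). "
            "PostgreSQL 15, < 500ms p99 latency required.",
)
```

Putting the background first means the model reads the constraints before the question, which tends to keep small local models on topic.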
Ask for comparisons, pros/cons, or numbered options:
"List 3 approaches for [X]. For each: one-line summary, main advantage,
main disadvantage."
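One way to reuse that structured-comparison format for any topic is a tiny template function. This is a sketch (`comparison_prompt` is an illustrative name, not part of the tool):

```python
def comparison_prompt(topic: str, n: int = 3) -> str:
    """Format a structured-comparison request for the local LLM."""
    return (
        f"List {n} approaches for {topic}. For each: one-line summary, "
        f"main advantage, main disadvantage."
    )

print(comparison_prompt("caching session data"))
```

Asking for a fixed number of options with a fixed shape per option makes the local LLM's answer easy to scan and compare.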
The local LLM provides a perspective, not a verdict: treat its output as one input, verify any factual claims yourself, and synthesize it with your own analysis before presenting a recommendation.
Before consulting, verify Ollama is running and configured:
- Call list_models to confirm the connection and see which models are available.
- Check that .claude/settings.local.json has env.OLLAMA_MODEL set to your desired model.
- If the server is not running, start it with `ollama serve`.

Example: the user asks "Redis or an in-memory cache, which is better?"
1. Call consult_local_llm:
question: "Redis vs in-memory cache for session storage. Which fits better?"
context: "Single-instance Node.js app, ~1000 concurrent users, sessions
expire in 30min, no horizontal scaling planned yet."
2. Review the local LLM's tradeoff analysis
3. Synthesize with your own knowledge and present to user
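The workflow above can also be sketched directly against Ollama's HTTP API. This is a minimal sketch, not the tool's actual implementation: it assumes Ollama's default localhost:11434 endpoint, the model name is a placeholder for whatever env.OLLAMA_MODEL is set to, and the helper names are illustrative. GET /api/tags and POST /api/generate are Ollama's real list-models and completion endpoints.

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434"  # Ollama's default local endpoint

def ollama_available(base_url: str = OLLAMA_URL) -> bool:
    """True if an Ollama server answers GET /api/tags (its list-models endpoint)."""
    try:
        with urllib.request.urlopen(f"{base_url}/api/tags", timeout=2) as resp:
            return "models" in json.load(resp)
    except (OSError, ValueError):
        return False

def build_payload(question: str, context: str, model: str) -> dict:
    """Assemble a non-streaming /api/generate request body."""
    return {
        "model": model,
        "prompt": f"{question}\n\nContext: {context}",
        "stream": False,
    }

# "llama3.1" is an assumption; substitute your configured OLLAMA_MODEL.
payload = build_payload(
    question="Redis vs in-memory cache for session storage. Which fits better?",
    context=("Single-instance Node.js app, ~1000 concurrent users, sessions "
             "expire in 30min, no horizontal scaling planned yet."),
    model="llama3.1",
)

if ollama_available():
    req = urllib.request.Request(
        f"{OLLAMA_URL}/api/generate",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=120) as resp:
        print(json.load(resp)["response"])  # the local LLM's tradeoff analysis
```

The availability check mirrors step "verify Ollama is running" above; everything after the POST is where you apply step 3 and synthesize the answer with your own knowledge.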