From neely-brain-dump
Use local Ollama models on Hoopa for coding tasks. Only use them when the expected output is LONGER than your prompt, since writing the prompt itself costs tokens.
```bash
npx claudepluginhub built-simple/claude-brain-dump-repo --plugin neely-brain-dump
```

This skill uses the workspace's default tool permissions.
**The rule is simple:** Only use Ollama when the OUTPUT will be significantly longer than your prompt.
Writing a prompt to Ollama costs Claude tokens. If you can just write the code yourself in fewer tokens than the prompt would take, do it yourself.
| Task | Prompt Length | Output Length | Use Ollama? |
|---|---|---|---|
| "Write isPrime function" | ~5 tokens | ~50 tokens | ✅ Yes |
| "Write a Dockerfile with multi-stage build" | ~10 tokens | ~200 tokens | ✅ Yes |
| "Write rate limiter class with semaphore" | ~15 tokens | ~300 tokens | ✅ Yes |
| "Reverse a string" | ~5 tokens | ~10 tokens | ❌ No, just write `s[::-1]` |
| "Add 1 to x" | ~5 tokens | ~5 tokens | ❌ No, just write `x + 1` |
| "Fix this typo in line 5" | ~20 tokens | ~10 tokens | ❌ No, just fix it |
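The ratio test in the table above can be sketched as a quick heuristic. This is an illustrative helper, not part of the skill: `estimate_tokens` uses a rough word-count approximation rather than a real tokenizer, and the `ratio` threshold of 3.0 is an assumed cutoff.

```python
def estimate_tokens(text: str) -> int:
    # Rough approximation: ~1.3 tokens per word (illustrative, not a real tokenizer).
    return max(1, round(len(text.split()) * 1.3))

def should_delegate(prompt: str, expected_output_tokens: int, ratio: float = 3.0) -> bool:
    # Delegate to Ollama only when the expected output is several times
    # longer than the prompt you would have to write.
    return expected_output_tokens >= ratio * estimate_tokens(prompt)

# "Write isPrime function" (~5-token prompt, ~50-token output) passes the test;
# "Reverse a string" (~5-token prompt, ~10-token output) does not.
```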
**Use Ollama for:**
- Boilerplate-heavy tasks (high output:input ratio)
- Tasks where the model knows patterns you'd have to look up

**Do it yourself for:**
- Quick fixes (low output:input ratio)
- Context-dependent tasks
- High-stakes code
Endpoint: `http://192.168.1.79:11434/api/generate`

```bash
curl -s http://192.168.1.79:11434/api/generate \
  -d '{"model": "qwen2.5-coder:14b", "prompt": "YOUR_PROMPT. Code only.", "stream": false}' \
  | jq -r '.response'
```
**Tip:** Add `"Code only."` to prompts to reduce verbose explanations.
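The same call can be made from Python; below is a minimal stdlib-only sketch, assuming the stock Ollama `/api/generate` endpoint on the Hoopa host above. The `build_payload` helper and its defaults are illustrative names, not part of the skill.

```python
import json
import urllib.request

OLLAMA_URL = "http://192.168.1.79:11434/api/generate"  # Hoopa host from this doc

def build_payload(prompt: str, model: str = "qwen2.5-coder:14b") -> dict:
    # Append "Code only." to suppress verbose explanations (see tip above).
    return {"model": model, "prompt": f"{prompt}. Code only.", "stream": False}

def generate(prompt: str, model: str = "qwen2.5-coder:14b") -> str:
    # Non-streaming request; Ollama returns the full completion in "response".
    req = urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(build_payload(prompt, model)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

Setting `"stream": false` keeps the response as a single JSON object, which matches the `jq -r '.response'` extraction in the curl example.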
| Task Type | Quality |
|---|---|
| Single functions | ✅ Excellent |
| Algorithms | ✅ Excellent |
| Regex | ✅ Excellent |
| SQL queries | ✅ Excellent |
| TypeScript types | ✅ Excellent |
| Dockerfiles | ✅ Good |
| Bash one-liners | ✅ Good |
| Code explanation | ✅ Good |
| API design | ✅ Good |
| Bug fixes (simple) | ✅ Good |
| Complex async | ⚠️ Can have bugs |
| Multi-step reasoning | ⚠️ Inconsistent |