Executes parallel AI tasks with Gemini Flash workers via a Node.js daemon, 200x cheaper than Opus. Use it for 3+ independent tasks, batch prompts, multi-subject research, or multiple URLs.
```bash
npx claudepluginhub joshuarweaver/cascade-ai-ml-agents-misc-2 --plugin sundial-org-awesome-openclaw-skills-4
```
**Every Opus token you burn is borrowed time. Swarm uses Gemini Flash at 200x lower cost.**
| Method (30 tasks) | Time | Cost | Notes |
|---|---|---|---|
| Opus (sequential) | ~30s | ~$0.50 | Burns your runway |
| Swarm (parallel) | ~1s | ~$0.003 | Preserves quota |
Same results. One keeps you alive longer.
Trigger swarm when you see ANY of these:

- 3+ independent tasks
- A batch of prompts
- Multi-subject research
- Multiple URLs to process

No exceptions. This is survival, not optimization.
```bash
# Check daemon (do this every session)
swarm status

# Start if not running
swarm start

# Parallel prompts
swarm parallel "What is X?" "What is Y?" "What is Z?"

# Research multiple subjects
swarm research "OpenAI" "Anthropic" "Mistral" --topic "AI safety"

# Benchmark
swarm bench --tasks 30
```
```javascript
const { parallel, research } = require('~/clawd/skills/node-scaling/lib');

// Run prompts in parallel (~1s for 3 prompts)
const batch = await parallel(['prompt1', 'prompt2', 'prompt3']);
console.log(batch.results); // Array of responses

// Multi-phase research (search → fetch → analyze)
const report = await research(['Subject1', 'Subject2'], 'topic');
```
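For the multiple-URLs trigger, the same `parallel()` call can fan a whole URL list out as one batch. The sketch below uses a local stand-in for `parallel()` (a thin `Promise.all` wrapper with a hypothetical echo handler) so it runs anywhere; in a real session you would `require` the node-scaling lib instead.

```javascript
// Stand-in for the swarm lib's parallel(): fan prompts out concurrently.
// The default handler just echoes; the real workers call Gemini Flash.
async function parallel(prompts, handler = async (p) => `echo: ${p}`) {
  const results = await Promise.all(prompts.map(handler));
  return { results };
}

// Fan a list of URLs out as one batch of summary prompts.
const urls = ['https://example.com/a', 'https://example.com/b', 'https://example.com/c'];
const prompts = urls.map((u) => `Summarize ${u} in two sentences.`);
parallel(prompts).then(({ results }) => console.log(results.length)); // 3
```

Note the difference from the real thing: `Promise.all` is unbounded, while the daemon caps concurrency at its worker count.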
```bash
swarm start       # Start daemon (background)
swarm stop        # Stop daemon
swarm status      # Show status, uptime, task count
swarm restart     # Restart daemon
swarm logs [N]    # Last N lines of daemon log
```
The daemon keeps workers warm for faster response. Auto-starts on first use if needed.
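The warm-worker idea behind the daemon can be sketched in a few lines: a fixed number of workers drain a shared task queue, so startup cost is paid once per worker instead of once per task. This is a conceptual sketch, not the daemon's actual code; `runPool` and its parameters are made up for illustration.

```javascript
// Minimal warm-pool sketch: workerCount workers pull tasks from a
// shared queue until it is empty, preserving each task's result order.
async function runPool(tasks, workerCount, handler) {
  const results = new Array(tasks.length);
  let next = 0; // index of the next unclaimed task
  async function worker() {
    while (next < tasks.length) {
      const i = next++; // claim synchronously, then do the slow part
      results[i] = await handler(tasks[i]);
    }
  }
  // Start all workers at once; each stays "warm" across many tasks.
  await Promise.all(Array.from({ length: workerCount }, worker));
  return results;
}
```

Usage: `await runPool(prompts, 20, callGeminiFlash)` with any async per-task handler.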
With daemon running (20 workers):
| Tasks | Time | Throughput |
|---|---|---|
| 10 | ~700ms | 14 tasks/sec |
| 30 | ~1,000ms | 30 tasks/sec |
| 50 | ~1,450ms | 35 tasks/sec |
Larger batches = higher throughput (amortizes connection overhead).
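That amortization can be modeled as a fixed batch overhead plus a per-task marginal cost. The constants below are eyeballed from the table above, not measured values from the daemon.

```javascript
// Rough throughput model: total time = fixed overhead + per-task cost.
const OVERHEAD_MS = 500; // assumed fixed cost per batch (connections, dispatch)
const PER_TASK_MS = 19;  // assumed marginal cost per task with 20 workers

function throughput(tasks) {
  const totalMs = OVERHEAD_MS + PER_TASK_MS * tasks;
  return tasks / (totalMs / 1000); // tasks per second
}
// Larger batches spread OVERHEAD_MS over more tasks, so tasks/sec climbs:
// throughput(10) ≈ 14.5, throughput(50) ≈ 34.5 — close to the table.
```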
Location: `~/.config/clawdbot/node-scaling.yaml`

```yaml
node_scaling:
  enabled: true
  limits:
    max_nodes: 20
    max_concurrent_api: 20
  provider:
    name: gemini
    model: gemini-2.0-flash
  cost:
    max_daily_spend: 10.00
```
| Issue | Fix |
|---|---|
| Daemon not running | `swarm start` |
| No API key | Set `GEMINI_API_KEY` or run `npm run setup` |
| Rate limited | Lower `max_concurrent_api` in config |
| Slow responses | Check `swarm status` for worker count |
Doing 30 tasks sequentially with Opus = 30+ seconds, ~$0.50, and a dead session that much sooner. Swarm parallel = ~1 second, ~$0.003, ZERO Opus burn.
Failing to use swarm for parallel work is a bug. Fix it immediately.