From distributed-gummy-orchestrator
Orchestrate gummy-agents across a distributed network using the `dw` command for load-balanced, multi-host AI development
npx claudepluginhub human-frontier-labs-inc/human-frontier-labs-marketplace --plugin distributed-gummy-orchestrator

This skill uses the workspace's default tool permissions.
Coordinate gummy-agent tasks across your Tailscale network using the `dw` command for intelligent load balancing and multi-host AI-powered development.
This skill activates when you want to:
✅ Distribute gummy tasks across multiple hosts
✅ Load-balanced AI development
✅ Multi-host coordination
✅ Network-wide specialist monitoring
[Main Claude] ──> [Orchestrator] ──> dw command ──> [Network Nodes]
                       │                                  │
                       ├──> Load Analysis                 ├──> gummy-agent
                       ├──> Host Selection                ├──> Specialists
                       ├──> Sync Management               └──> Tasks
                       └──> Task Distribution
1. Load-Balanced Execution
# User request: "Run database optimization on best host"
# Agent:
1. Execute: dw load
2. Parse metrics (CPU, memory, load average)
3. Calculate composite scores
4. Select optimal host
5. Sync codebase: dw sync <host>
6. Execute: dw run <host> "cd <project> && gummy task 'optimize database queries'"
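The load-analysis and host-selection steps above can be sketched in Python. This is a hypothetical illustration: the metric field names (`cpu`, `mem`, `load_avg`) and the 0.4/0.3/0.3 weights mirror the `load_weights` config shown later, not the actual `dw load` output format.

```python
# Hypothetical sketch: composite load scoring used to pick the optimal host.
# Field names and weights are assumptions, not the real `dw load` schema.

def composite_score(metrics, weights=(0.4, 0.3, 0.3)):
    """Lower is better: weighted blend of CPU %, memory %, and load average."""
    w_cpu, w_mem, w_load = weights
    return (w_cpu * metrics["cpu"] / 100
            + w_mem * metrics["mem"] / 100
            + w_load * min(metrics["load_avg"], 1.0))

def select_optimal_host(hosts):
    """Return the (host, score) pair with the lowest composite score."""
    scored = {h: composite_score(m) for h, m in hosts.items()}
    best = min(scored, key=scored.get)
    return best, scored[best]

# Example metrics as they might be parsed from `dw load`
hosts = {
    "node-1": {"cpu": 15, "mem": 45, "load_avg": 0.18},
    "node-2": {"cpu": 65, "mem": 30, "load_avg": 0.95},
}
best, score = select_optimal_host(hosts)
```

With these sample numbers, node-1 wins because all three of its metrics are lower; the weights let you bias selection toward CPU headroom for compute-heavy tasks.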
2. Parallel Distribution
# User request: "Test on all platforms"
# Agent:
1. Get hosts: dw status
2. Filter by availability
3. Sync all: for host in hosts; do dw sync $host; done
4. Launch parallel: dw run host1 "test" & dw run host2 "test" & ...
5. Aggregate results
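The fan-out/aggregate pattern above can be sketched with a thread pool. The real command would be `dw run <host> <task>`; here it is stubbed with a local `echo` so the structure is runnable without a live network — swap in the real argv on your cluster.

```python
# Hypothetical sketch: launch tasks on several hosts at once and gather results.
import subprocess
from concurrent.futures import ThreadPoolExecutor

def run_on_host(host, task):
    # Real version would be: ["dw", "run", host, task]
    cmd = ["echo", f"{host}: {task}"]
    out = subprocess.run(cmd, capture_output=True, text=True)
    return host, out.stdout.strip()

def parallel_gummy_tasks(assignments):
    """assignments: {host: task}. Launch all concurrently, aggregate by host."""
    with ThreadPoolExecutor(max_workers=len(assignments)) as pool:
        futures = [pool.submit(run_on_host, h, t) for h, t in assignments.items()]
        return dict(f.result() for f in futures)

results = parallel_gummy_tasks({"node-1": "unit tests", "node-2": "e2e tests"})
```

Threads (not processes) are the right fit here since each worker just blocks on a remote command.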
3. Network-Wide Monitoring
# User request: "Show all specialists"
# Agent:
1. Get active hosts: dw status
2. For each host: dw run <host> "gummy-watch status"
3. Parse specialist states
4. Aggregate and display
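The parse-and-aggregate step can be sketched as below. The line format parsed here (`<specialist-name> <session-count>`) is an assumption about `gummy-watch status` output, used only to show the aggregation shape.

```python
# Hypothetical sketch: merge per-host specialist listings into one network view.
def parse_specialists(host, text):
    """Parse assumed '<name> <sessions>' lines from one host's status output."""
    specs = []
    for line in text.splitlines():
        name, sessions = line.split()
        specs.append({"host": host, "name": name, "sessions": int(sessions)})
    return specs

# Example raw output as it might come back from each `dw run <host> ...`
raw = {
    "node-1": "database-expert 8\ntesting-specialist 3",
    "node-2": "api-developer 5",
}
all_specs = [s for h, t in raw.items() for s in parse_specialists(h, t)]
total = len(all_specs)
```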
Main orchestration logic - coordinates distributed gummy execution.
Functions:
- `select_optimal_host()` - Choose best node based on load
- `sync_and_execute_gummy()` - Sync code + run gummy task
- `parallel_gummy_tasks()` - Execute multiple tasks simultaneously
- `monitor_all_specialists()` - Aggregate specialist status across network

Usage:
from scripts.orchestrate_gummy import select_optimal_host, sync_and_execute_gummy
# Find best host
optimal = select_optimal_host(task_type="database")
# Returns: {'host': 'node-1', 'score': 0.23, 'cpu': 15%, 'mem': 45%}
# Execute on optimal host
result = sync_and_execute_gummy(
host=optimal['host'],
task="optimize user queries",
project_dir="/path/to/project"
)
Wrapper for `dw` command operations.
Functions:
- `get_load_metrics()` - Execute `dw load` and parse results
- `get_host_status()` - Execute `dw status` and parse availability
- `sync_directory()` - Execute `dw sync` to target host
- `run_remote_command()` - Execute `dw run` on specific host

User Query: "Run database optimization on optimal host"
Agent Actions:
1. `get_load_metrics()` to fetch cluster load
2. `select_optimal_host(task_type="database")` to choose the best node
3. `sync_directory(host, project_path)` to sync the codebase
4. `dw run <host> "cd project && gummy task 'optimize database queries'"`
5. Monitor progress with `gummy-watch`

User Query: "Run tests across all available nodes"
Agent Actions:
1. `get_host_status()` to get available hosts
2. `sync_directory(host, project)` for each host
3. `dw run <host> "cd project && gummy task 'run test suite'" &` in parallel

User Query: "Show all running specialists across my network"
Agent Actions:
1. `dw status` to list active hosts
2. `dw run <host> "command -v gummy"` to verify gummy is installed
3. `dw run <host> "ls -la ~/.gummy/specialists"` to list specialist state

User Query: "I have database work and API work - distribute optimally"
Agent Actions:
dw run <cpu-host> "gummy task 'optimize database queries'" &
dw run <io-host> "gummy task 'build REST API endpoints'" &
# SSH connection failure: retry on a different host
try:
    result = d_run(host, command)
except SSHConnectionError:
    fallback = select_optimal_host(exclude=[failed_host])
    result = d_run(fallback, command)

# Sync failure: fall back to local execution
if not sync_successful:
    return execute_local_gummy(task)

# Load data unavailable: use round-robin distribution
if not load_data:
    return round_robin_host_selection()
Load Metrics Caching:
`/tmp/d-load-cache.json`

Host Availability:
`/tmp/d-status-cache.json`

Specialist State:
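A minimal sketch of how the load-metrics cache file above could be read with a freshness check. The 30-second TTL and the JSON payload shape are assumptions; only the cache path comes from the document.

```python
# Hypothetical sketch: cache dw load metrics in /tmp with a short TTL so
# repeated scheduling decisions don't re-query the whole network.
import json
import os
import time

CACHE_PATH = "/tmp/d-load-cache.json"
TTL_SECONDS = 30  # assumed freshness window

def read_cached_load():
    """Return cached metrics, or None if the file is missing or stale."""
    try:
        if time.time() - os.path.getmtime(CACHE_PATH) > TTL_SECONDS:
            return None
        with open(CACHE_PATH) as f:
            return json.load(f)
    except (OSError, json.JSONDecodeError):
        return None

def write_load_cache(metrics):
    with open(CACHE_PATH, "w") as f:
        json.dump(metrics, f)

write_load_cache({"node-1": {"cpu": 15}})
cached = read_cached_load()
```

Returning `None` on any failure lets the caller fall through to a fresh `dw load`, so a corrupt or stale cache never blocks scheduling.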
This skill activates when user mentions:
Distributed Operations:
Gummy Integration:
Load Balancing:
Coordination:
Monitoring:
Examples That Activate:
Does NOT Activate (local operations):
👤: "Run database optimization on the best available host"
🤖: I'll find the optimal host and execute the task there.
[Executes: dw load]
[Analysis: node-1 has lowest composite score (0.18)]
[Executes: dw sync node-1]
[Executes: dw run node-1 "cd ~/project && gummy task 'optimize database queries'"]
✅ Task launched on node-1 (CPU: 12%, Mem: 35%, Load: 0.18)
Monitor with: ssh node-1 'gummy-watch'
👤: "Run tests across all available nodes"
🤖: I'll distribute test execution across your network.
[Executes: dw status]
[Found: 4 available hosts]
[Syncing codebase to all hosts...]
✓ node-1 synced
✓ node-2 synced
✓ node-3 synced
✓ node-4 synced
[Launching parallel test execution...]
node-1: Running unit tests
node-2: Running integration tests
node-3: Running e2e tests
node-4: Running performance tests
[All tests complete in 8 minutes (vs 32 minutes sequential)]
Results:
✅ unit: 1,247 passed
✅ integration: 89 passed
✅ e2e: 34 passed
⚠️ performance: 3 slow queries detected
👤: "I need to optimize database queries and build new API endpoints"
🤖: I'll distribute these optimally based on task characteristics.
[Analyzing tasks...]
- Database optimization: CPU-intensive
- API development: I/O-bound
[Fetching cluster load...]
node-1: CPU 15%, I/O 45% → Best for database work
node-2: CPU 65%, I/O 12% → Best for API work
[Distributing tasks...]
✓ Database optimization → node-1
✓ API development → node-2
[Syncing and launching...]
node-1: database-expert specialist active
node-2: api-developer specialist active
Both tasks running in parallel. Expected completion: 15 minutes.
Monitor:
ssh node-1 'gummy-watch'
ssh node-2 'gummy-watch'
👤: "Show all specialists running across my network"
🤖: Scanning network for active specialists...
[Checking all hosts...]
╔═══════════════════════════════════════════════════════╗
║ DISTRIBUTED SPECIALIST DASHBOARD ║
╚═══════════════════════════════════════════════════════╝
┌─ node-1 ──────────────────────────────────────────────┐
│ ⚡ database-expert Sessions: 8 Cost: $0.02 │
│ 💤 testing-specialist Sessions: 3 Cost: $0.01 │
└───────────────────────────────────────────────────────┘
┌─ node-2 ──────────────────────────────────────────────┐
│ ⚡ api-developer Sessions: 5 Cost: $0.01 │
│ ⚡ frontend-specialist Sessions: 12 Cost: $0.03 │
└───────────────────────────────────────────────────────┘
┌─ node-3 ──────────────────────────────────────────────┐
│ 💤 security-specialist Sessions: 2 Cost: $0.01 │
└───────────────────────────────────────────────────────┘
Total: 5 active specialists across 3 hosts
Total cost today: $0.08
👤: "Sync my codebase to node-3 and run frontend specialist there"
🤖: I'll sync and execute on node-3.
[Current directory: ~/projects/myapp]
[Syncing to node-3...]
Sending: src/ components/ lib/ package.json
Excluding: node_modules .git dist
✓ Synced 1,247 files (45 MB) in 3 seconds
[Launching frontend specialist on node-3...]
✓ specialist created (session: abc123)
⚡ Working on task...
Task active on node-3. Monitor: ssh node-3 'gummy-watch'
- `dw` command (distributed CLI)
- gummy-agent installed on remote hosts

Create `~/.config/distributed-gummy/config.yaml`:
# Load balancing weights
load_weights:
cpu: 0.4
memory: 0.3
load_average: 0.3
# Sync exclusions
sync_exclude:
- node_modules
- .git
- dist
- build
- .DS_Store
- "*.log"
# Host preferences for task types
host_preferences:
database:
- node-1 # High CPU
frontend:
- node-2 # High memory
testing:
- node-3 # Dedicated test node
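The `host_preferences` section above can be applied as a simple filter before load scoring. This sketch uses a plain dict standing in for the parsed YAML so it runs without PyYAML; the fallback-to-all behavior when no preferred host is available is an assumed policy.

```python
# Hypothetical sketch: prefer the configured hosts for a task type,
# falling back to all available hosts when none of them are up.
config = {
    "load_weights": {"cpu": 0.4, "memory": 0.3, "load_average": 0.3},
    "host_preferences": {"database": ["node-1"], "testing": ["node-3"]},
}

def preferred_hosts(task_type, available):
    """Configured hosts for this task type that are actually up, else all."""
    prefs = config["host_preferences"].get(task_type, [])
    up = [h for h in prefs if h in available]
    return up or available

hosts = preferred_hosts("database", ["node-1", "node-2", "node-3"])
```

Load-based selection then runs only over the returned subset, so preferences narrow the field without ever leaving the scheduler with zero candidates.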
# Check Tailscale connectivity
tailscale status
# Check SSH access
ssh <host> echo "OK"
# Verify dw command works
dw status
# Manual sync test
dw sync <host>
# Check rsync
which rsync
# Check disk space on remote
dw run <host> "df -h"
# Check gummy installation
dw run <host> "which gummy"
# Install if needed
dw run <host> "brew install WillyV3/tap/gummy-agent"
`dw load` command

1.0.0 - Initial release