# agenthub
Multi-agent collaboration plugin that spawns N parallel subagents competing on the same task, each isolated in its own git worktree. Agents work independently, results are evaluated by a metric or an LLM judge, and the best branch is merged. Use when: the user wants multiple approaches tried in parallel, such as code optimization, content variation, research exploration, or any task that benefits from parallel competition. Requires: a git repo.
Install via the plugin hub:

```
npx claudepluginhub ciciliaeth/claude-skills --plugin agenthub
```

Slash command: `/agenthub:agenthub`
Spawn N parallel AI agents that compete on the same task. Each agent works in an isolated git worktree. The coordinator evaluates results and merges the winner.
## Commands

| Command | Description |
|---|---|
| `/hub:init` | Create a new collaboration session: task, agent count, eval criteria |
| `/hub:spawn` | Launch N parallel subagents in isolated worktrees |
| `/hub:status` | Show DAG state, agent progress, branch status |
| `/hub:eval` | Rank agent results by metric or LLM judge |
| `/hub:merge` | Merge the winning branch, archive the losers |
| `/hub:board` | Read/write the agent message board |
| `/hub:run` | One-shot lifecycle: init → baseline → spawn → eval → merge |
## Iteration templates

When spawning with `--template`, agents follow a predefined iteration pattern:
| Template | Pattern | Use case |
|---|---|---|
| `optimizer` | Edit → eval → keep/discard → repeat ×10 | Performance, latency, size |
| `refactorer` | Restructure → test → iterate until green | Code quality, tech debt |
| `test-writer` | Write tests → measure coverage → repeat | Test coverage gaps |
| `bug-fixer` | Reproduce → diagnose → fix → verify | Bug-fix approaches |
Templates are defined in `references/agent-templates.md`; a sketch of the `optimizer` loop follows.
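A minimal sketch of the `optimizer` pattern, not the plugin's actual implementation: the `run_metric` helper, the commit messages, and the lower-is-better assumption are all illustrative.

```python
import subprocess
from typing import Callable

def run_metric(eval_cmd: str) -> float:
    """Run the eval command and parse a single number from its stdout."""
    out = subprocess.run(eval_cmd, shell=True, capture_output=True, text=True)
    return float(out.stdout.strip())

def optimizer_loop(make_edit: Callable[[], None], eval_cmd: str, rounds: int = 10) -> float:
    """Edit → eval → keep/discard → repeat ×10: commit only improving edits."""
    best = run_metric(eval_cmd)  # baseline before any edit
    for i in range(rounds):
        make_edit()
        score = run_metric(eval_cmd)
        if score < best:  # assumes lower is better (latency, size)
            best = score
            subprocess.run(["git", "commit", "-am", f"optimizer round {i}: {score}"], check=True)
        else:
            subprocess.run(["git", "checkout", "--", "."], check=True)  # discard the edit
    return best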
## Coordinator lifecycle

The main Claude Code session is the coordinator. It follows this lifecycle:

```
INIT → DISPATCH → MONITOR → EVALUATE → MERGE
```
### INIT

Run `/hub:init` to create a session. This generates:

- `.agenthub/sessions/{session-id}/config.yaml`: task config
- `.agenthub/sessions/{session-id}/state.json`: state machine
- `.agenthub/board/`: message board channels

### DISPATCH

Run `/hub:spawn` to launch agents. For each agent 1..N, the coordinator posts a task assignment to `.agenthub/board/dispatch/` and launches the agent with `isolation: "worktree"`, giving it its own git worktree.

### MONITOR

Run `/hub:status` to check progress:

- `dag_analyzer.py --status --session {id}` shows branch state
- the `progress/` channel has agent updates

### EVALUATE

Run `/hub:eval` to rank results by metric or LLM judge (see Evaluation below).

### MERGE

Run `/hub:merge` to finalize:

- `git merge --no-ff` the winning branch into the base branch
- `git tag hub/archive/{session}/agent-{i}` the losing branches

### Agent prompt

Each subagent receives this prompt pattern:
```
You are agent-{i} in hub session {session-id}.

Your task: {task description}

Instructions:
1. Read your assignment at .agenthub/board/dispatch/{seq}-agent-{i}.md
2. Work in your worktree — make changes, run tests, iterate
3. Commit all changes with descriptive messages
4. Write your result summary to .agenthub/board/results/agent-{i}-result.md
5. Exit when done
```
Agents do NOT see each other's work. They do NOT communicate with each other. They only write to the board for the coordinator to read.
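A minimal sketch of how that isolation might be set up with `git worktree`. The worktree path layout is an assumption; branch names follow the pattern in the next section.

```python
import subprocess

def spawn_worktree(session_id: str, agent: int, base: str = "main") -> str:
    """Create an isolated branch + worktree for one agent."""
    branch = f"hub/{session_id}/agent-{agent}/attempt-1"
    path = f".agenthub/worktrees/{session_id}/agent-{agent}"  # assumed layout
    # `git worktree add -b <branch> <path> <base>` gives the agent its own checkout
    subprocess.run(["git", "worktree", "add", "-b", branch, path, base], check=True)
    return path
```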
## Branch naming

Branches follow the pattern `hub/{session-id}/agent-{N}/attempt-{M}`, where session IDs are `YYYYMMDD-HHMMSS` timestamps.

### Frontier

Frontier = branch tips with no child branches. Equivalent to AgentHub's "leaves" query.
```
python scripts/dag_analyzer.py --frontier --session {id}
```
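`dag_analyzer.py` is the real implementation; as a sketch, the frontier can be computed from git alone by treating a branch as a leaf when no other session branch contains its tip (an assumption about how children are defined):

```python
import subprocess

def git_lines(*args: str) -> list[str]:
    """Run a git command and return its stdout tokens."""
    out = subprocess.run(["git", *args], capture_output=True, text=True, check=True)
    return out.stdout.split()

def frontier(session_id: str) -> list[str]:
    """Session branches whose tip is contained in no other session branch."""
    branches = git_lines("branch", "--list", f"hub/{session_id}/*",
                         "--format=%(refname:short)")
    leaves = []
    for b in branches:
        containing = git_lines("branch", "--contains", b, "--format=%(refname:short)")
        if not any(c in branches and c != b for c in containing):
            leaves.append(b)
    return leaves
```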
The DAG is append-only: losing branches are archived via tags rather than deleted.
## Message board

Location: `.agenthub/board/`

| Channel | Writer | Reader | Purpose |
|---|---|---|---|
| `dispatch/` | Coordinator | Agents | Task assignments |
| `progress/` | Agents | Coordinator | Status updates |
| `results/` | Agents + Coordinator | All | Final results + merge summary |
Each board message is a markdown file with YAML frontmatter, named `{seq:03d}-{author}-{timestamp}.md`:

```
---
author: agent-1
timestamp: 2026-03-17T14:30:22Z
channel: results
parent: null
---

## Result Summary

- **Approach**: Replaced O(n²) sort with hash map
- **Files changed**: 3
- **Metric**: 142ms (baseline: 180ms, delta: -38ms)
- **Confidence**: High — all tests pass
```
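A sketch of posting such a message; the real plugin routes writes through `board_manager.py`, and the sequence-numbering rule here (next seq = count of existing posts) is an assumption.

```python
from datetime import datetime, timezone
from pathlib import Path

def post(channel: str, author: str, body: str, parent: str | None = None) -> Path:
    """Write one frontmatter message to a board channel."""
    chan = Path(".agenthub/board") / channel
    chan.mkdir(parents=True, exist_ok=True)
    seq = len(list(chan.glob("*.md")))          # assumed: next seq = post count
    now = datetime.now(timezone.utc)
    name = f"{seq:03d}-{author}-{now:%Y%m%dT%H%M%SZ}.md"
    text = (f"---\nauthor: {author}\n"
            f"timestamp: {now.isoformat(timespec='seconds')}\n"
            f"channel: {channel}\nparent: {parent or 'null'}\n---\n\n{body}\n")
    (chan / name).write_text(text)
    return chan / name
```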
## Evaluation

### Metric

Best for: benchmarks, test pass rates, file sizes, response times.
```
python scripts/result_ranker.py --session {id} \
  --eval-cmd "pytest bench.py --json" \
  --metric p50_ms --direction lower
```
The ranker runs the eval command in each agent's worktree directory and parses the metric from stdout.
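A sketch of that loop, assuming the eval command prints JSON with the metric as a top-level key (`result_ranker.py`'s actual parsing may differ):

```python
import json
import subprocess

def rank(worktrees: dict[str, str], eval_cmd: str, metric: str,
         lower: bool = True) -> list[tuple[str, float]]:
    """Run eval_cmd in each worktree, read the metric from its JSON stdout, sort."""
    scores = {}
    for agent, path in worktrees.items():
        out = subprocess.run(eval_cmd, shell=True, cwd=path,
                             capture_output=True, text=True, check=True)
        scores[agent] = float(json.loads(out.stdout)[metric])
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=not lower)
```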
### LLM judge

Best for: code quality, readability, architecture decisions.
The coordinator reads each agent's diff (`git diff base...agent-branch`) and ranks the candidates along those dimensions.
### Hybrid

Run the metric evaluation first. If the top agents are within 10% of each other, use the LLM judge to break ties.
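The 10% window as a concrete check, assuming "within 10%" means within 10% of the best score:

```python
def tie_candidates(ranked: list[tuple[str, float]], window: float = 0.10) -> list[str]:
    """Agents within `window` of the best score; more than one means the judge breaks the tie."""
    best = ranked[0][1]
    return [agent for agent, score in ranked if abs(score - best) <= abs(best) * window]
```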
## Session states

```
init → running → evaluating → merged
                            → archived (if no winner)
```
State transitions are managed by `session_manager.py`:
| From | To | Trigger |
|---|---|---|
| `init` | `running` | `/hub:spawn` completes |
| `running` | `evaluating` | All agents return |
| `evaluating` | `merged` | `/hub:merge` completes |
| `evaluating` | `archived` | No winner / all failed |
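A sketch of how `session_manager.py` might enforce this table; the function shape is an assumption, the transitions are the table's.

```python
VALID = {
    ("init", "running"),        # /hub:spawn completes
    ("running", "evaluating"),  # all agents return
    ("evaluating", "merged"),   # /hub:merge completes
    ("evaluating", "archived"), # no winner / all failed
}

def transition(current: str, target: str) -> str:
    """Reject any transition the table does not allow."""
    if (current, target) not in VALID:
        raise ValueError(f"illegal transition: {current} -> {target}")
    return target
```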
## Failure handling

The coordinator should act when:
| Signal | Action |
|---|---|
| All agents crashed | Post a failure summary, suggest a retry with different constraints |
| No improvement over baseline | Archive the session, suggest different approaches |
| Orphan worktrees detected | Run `session_manager.py --cleanup {id}` |
| Session stuck in `running` | Check the board for progress, consider a timeout |
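Orphan detection can lean on `git worktree list --porcelain`; a sketch of the underlying idea (cleanup itself belongs to `session_manager.py`):

```python
import subprocess

def session_worktrees(session_id: str) -> list[str]:
    """Paths of worktrees whose checked-out branch belongs to this session."""
    out = subprocess.run(["git", "worktree", "list", "--porcelain"],
                         capture_output=True, text=True, check=True).stdout
    paths = []
    for block in out.strip().split("\n\n"):
        fields = dict(line.split(" ", 1) for line in block.splitlines() if " " in line)
        if f"hub/{session_id}/" in fields.get("branch", ""):
            paths.append(fields["worktree"])
    return paths  # candidates for `git worktree remove` during cleanup
```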
## Installation

```bash
# Copy to your Claude Code skills directory
cp -r engineering/agenthub ~/.claude/skills/agenthub

# Or install via ClawHub
clawhub install agenthub
```
## Scripts

| Script | Purpose |
|---|---|
| `hub_init.py` | Initialize the `.agenthub/` structure and session |
| `dag_analyzer.py` | Frontier detection, DAG graph, branch status |
| `board_manager.py` | Message board CRUD (channels, posts, threads) |
| `result_ranker.py` | Rank agents by metric or diff quality |
| `session_manager.py` | Session state machine and cleanup |