From agent-almanac
Facilitates leaderless consensus among distributed agents via bee-inspired scouting, quorum sensing, threshold voting, and commitment dynamics. For multi-agent AI, distributed databases, or group decisions without central authority.
npx claudepluginhub pjt222/agent-almanac

This skill uses the workspace's default tool permissions.
---
Achieve collective agreement across distributed agents without a central authority — using scout advocacy, threshold quorum sensing, and commitment dynamics modeled on honeybee swarm decision-making.
Use alongside coordinate-swarm when the coordination requires explicit collective decisions.

Step 1: Ensure the decision space is adequately explored before any advocacy begins.
Expected: A set of independently evaluated proposals with quality scores and assessments. No option has been eliminated by a single evaluator; diversity of perspective is preserved.
On failure: If scouts converge on the same option without independent evaluation, the scouting was not truly independent. Rerun with explicit information barriers. If too many options survive to the advocacy phase, raise the minimum quality threshold. If too few survive, lower it or add more scouts.
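The scouting stage can be sketched in a few lines of Python. This is a minimal illustration under assumed names — `Assessment`, `scout_options`, and the `min_quality` cutoff are hypothetical, not part of any real interface:

```python
from dataclasses import dataclass

@dataclass
class Assessment:
    scout: str
    option: str
    quality: float  # this scout's independent 0-100 score

def scout_options(scouts, options, evaluate, min_quality=50.0):
    """Each scout scores every option without seeing any other scout's
    score (the information barrier). Options whose best independent
    score falls below min_quality do not survive to advocacy."""
    assessments = [Assessment(s, o, evaluate(s, o))
                   for s in scouts for o in options]
    surviving = {o for o in options
                 if max(a.quality for a in assessments if a.option == o)
                 >= min_quality}
    return assessments, surviving
```

The `evaluate` callback is where the information barrier lives: each call sees only one scout and one option, never another scout's score. Raising or lowering `min_quality` is the tuning knob described above for too few or too many survivors.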
Step 2: Allow scouts to advocate for their preferred options, with advocacy intensity proportional to quality.
Advocacy Dynamics:
┌────────────────────────────────────────────────────────┐
│ Scout A advocates Option 1 (quality 85) ──→ ◉◉◉◉◉      │
│ Scout B advocates Option 2 (quality 70) ──→ ◉◉◉        │
│ Scout C advocates Option 3 (quality 45) ──→ ◉          │
│                                                        │
│ Uncommitted agents inspect:                            │
│ Agent D inspects Option 1 → confirms → joins ◉◉◉◉◉◉    │
│ Agent E inspects Option 2 → confirms → joins ◉◉◉◉      │
│ Agent F inspects Option 3 → disagrees → inspects Opt 1 │
│           → confirms → joins                ◉◉◉◉◉◉◉    │
│                                                        │
│ Over time: Option 1 advocacy grows, Option 3 fades     │
└────────────────────────────────────────────────────────┘
Expected: Advocacy for the best option(s) grows over time as agents independently verify quality. Advocacy for weaker options fades as verification fails. The group naturally converges toward the strongest option without any agent dictating the choice.
On failure: If advocacy doesn't converge (two options remain neck-and-neck), the options may be genuinely equivalent — proceed to quorum with either, or use a tiebreaker rule. If advocacy converges too fast on a mediocre option, increase the independence of evaluation (more scouts, stricter information barriers) and add a mandatory cross-inspection step.
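The advocacy loop above can be modeled as follows. This is a deterministic toy sketch — a real swarm would randomize inspection order and let commitment decay — and `run_advocacy` plus the `inspect` callback are illustrative assumptions:

```python
def run_advocacy(qualities, uncommitted, inspect):
    """qualities: option -> quality score from scouting.
    Each uncommitted agent inspects options in order of current
    advocacy intensity (quality x supporter count) and commits to
    the first option whose quality it independently confirms."""
    supporters = {o: 1 for o in qualities}  # each option starts with its scout
    for agent in uncommitted:
        ranked = sorted(qualities,
                        key=lambda o: qualities[o] * supporters[o],
                        reverse=True)
        for option in ranked:
            if inspect(agent, option):  # independent verification step
                supporters[option] += 1
                break  # commit; a disagreeing agent falls through to next option
    return supporters
```

Because each agent verifies before joining, a high supporter count certifies quality rather than mere popularity; weak options keep only their original scout and their advocacy fades.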
Step 3: Define the commitment threshold that triggers collective action.
Expected: A clear quorum moment where enough agents have independently committed to one option. The decision is legitimate because it emerged from independent evaluation, not authority or coercion.
On failure: If quorum is never reached within the time budget, escalate to Step 4 (deadlock resolution). If quorum is reached but agents are unhappy, the advocacy phase was too short — agents committed without adequate evaluation. If the consensus was wrong (discovered after the fact), the independent scouting was insufficient — increase scout diversity and evaluation thoroughness in the next cycle.
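A minimal sketch of the quorum check, assuming commitments are tracked as an agent-to-option map (all names here are hypothetical):

```python
def quorum_reached(commitments, group_size, threshold=0.6):
    """commitments: agent -> option. Returns the winning option once
    the share of the WHOLE group (not just committed agents) behind a
    single option crosses the threshold; otherwise None."""
    tally = {}
    for option in commitments.values():
        tally[option] = tally.get(option, 0) + 1
    for option, count in tally.items():
        if count / group_size >= threshold:
            return option
    return None
```

Measuring the share against the whole group, not just the agents who have committed so far, prevents an early minority from declaring quorum before evaluation is adequate.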
Step 4: Break decision gridlock when the natural consensus process stalls.
Expected: Deadlock resolved through the appropriate intervention. The resolution is visible and accepted by the group as fair process, even if individual agents preferred a different outcome.
On failure: If deadlocks recur on the same decision, the decision framing may be wrong. Step back and ask: can the decision be decomposed into smaller, independent decisions? Can the scope be reduced? Is there a "try both and see" option? Sometimes the best consensus is "we'll run a time-boxed experiment."
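One possible intervention for the neck-and-neck case, sketched under the assumption that support shares are already tallied; the `epsilon` margin and the seeded draw are illustrative choices, not prescribed mechanics:

```python
import random

def resolve_deadlock(support, epsilon=0.05, rng_seed=0):
    """support: option -> share of committed agents (0-1).
    If the top two options are within epsilon of each other, treat
    them as genuinely equivalent and apply a pre-agreed seeded random
    tiebreak; otherwise the clear leader stands."""
    ranked = sorted(support, key=support.get, reverse=True)
    if len(ranked) >= 2 and support[ranked[0]] - support[ranked[1]] < epsilon:
        choice = random.Random(rng_seed).choice(ranked[:2])
        return choice, "seeded-tiebreak"
    return ranked[0], "clear-leader"
```

A seeded draw keeps the resolution reproducible and auditable, which is what lets agents accept it as fair process even when they preferred the other option.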
Step 5: Evaluate whether the consensus process produced a good decision, not just a decision.
Expected: A feedback loop that improves consensus quality over time. The group learns to scout more effectively, advocate more honestly, and commit more confidently.
On failure: If consensus quality metrics are poor (high regret, slow decisions), audit the process for structural failures: insufficient scouting diversity, advocacy without verification, or thresholds set too low for the decision type. Rebuild the specific failing stage rather than overhauling the entire process.
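The feedback loop can be reduced to a small audit over past decisions. The `regret` and `rounds` fields and both limits below are assumed metrics for illustration, not a defined schema:

```python
def audit_consensus(history, regret_limit=0.3, rounds_limit=10):
    """history: list of past decisions, each with 'rounds' (rounds
    until quorum) and 'regret' (0-1 post-hoc gap between the chosen
    option and the best one). Returns averages plus the single
    failing stage to rebuild first."""
    n = len(history)
    avg_rounds = sum(d["rounds"] for d in history) / n
    avg_regret = sum(d["regret"] for d in history) / n
    if avg_regret > regret_limit:
        rebuild = "scouting"   # bad outcomes: evaluation not independent or thorough
    elif avg_rounds > rounds_limit:
        rebuild = "advocacy"   # slow decisions: verification fails to differentiate
    else:
        rebuild = None
    return {"avg_rounds": avg_rounds, "avg_regret": avg_regret,
            "rebuild": rebuild}
```

Mapping each failure signature to one stage keeps the fix targeted at the failing stage rather than the entire process.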
coordinate-swarm — foundational coordination framework that supports the signal-based consensus mechanism
defend-colony — collective defense decisions often require rapid consensus under threat
scale-colony — consensus mechanisms must adapt when the group size changes significantly
dissolve-form — morphic skill for controlled dismantling, where consensus before dissolution is critical
plan-sprint — sprint planning involves team consensus on commitment scope
conduct-retrospective — retrospectives are a form of consensus-building about process improvement
build-coherence — AI self-application variant; maps bee democracy to single-agent multi-path reasoning with confidence thresholds and deadlock resolution