Launches all available agents in parallel waves for hypothesis generation on cross-domain problems where the domain is unknown, single agents stall, or diverse perspectives are needed. Outputs ranked hypotheses with convergence analysis.
npx claudepluginhub pjt222/agent-almanac
Consult all available agents in parallel waves to generate diverse hypotheses for open-ended problems. Each agent reasons through its unique domain lens — a kabalist finds patterns via gematria, a martial-artist proposes conditional branching, a contemplative notices structure by sitting with the data. Convergence across independent perspectives is the primary signal that a hypothesis has merit.
Write a problem brief that any agent can understand regardless of domain expertise. Include:
## Brief: [Problem Title]
**Problem**: [1-2 sentence statement]
**Examples**:
1. [Input] → [Output] (explain what's known)
2. [Input] → [Output]
3. [Input] → [Output]
4. [Input] → [Output]
5. [Input] → [Output]
**Already tried**: [List failed approaches to avoid rediscovery]
**Success looks like**: [Testable criterion]
**Respond with**:
- Hypothesis: [Your proposed mechanism in one sentence]
- Reasoning: [Why your domain expertise suggests this]
- Confidence: [low/medium/high]
- Testable prediction: [If my hypothesis is correct, then X should be true]
Expected: A brief that is self-contained — an agent receiving only this text has everything needed to reason about the problem.
On failure: If you cannot articulate 5 examples or a verification method, the problem is not ready for multi-agent consultation. Narrow the scope first.
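If you assemble briefs programmatically, here is a minimal Python sketch of the template above. The function name and parameters are illustrative, not part of the skill; it also enforces the 5-example readiness rule from the failure note.

```python
RESPOND_WITH = """**Respond with**:
- Hypothesis: [Your proposed mechanism in one sentence]
- Reasoning: [Why your domain expertise suggests this]
- Confidence: [low/medium/high]
- Testable prediction: [If my hypothesis is correct, then X should be true]"""

def build_brief(title: str, problem: str, examples: list[tuple[str, str]],
                tried: list[str], success: str) -> str:
    """Render the self-contained brief; refuse if fewer than 5 worked examples."""
    if len(examples) < 5:
        raise ValueError("Narrow the scope: a brief needs at least 5 examples")
    lines = [f"## Brief: {title}", f"**Problem**: {problem}", "**Examples**:"]
    lines += [f"{i}. {inp} → {out}" for i, (inp, out) in enumerate(examples, 1)]
    lines += [f"**Already tried**: {'; '.join(tried)}",
              f"**Success looks like**: {success}", RESPOND_WITH]
    return "\n".join(lines)
```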
List all available agents and divide them into waves of ~10. Ordering does not matter for the first 2 waves; for subsequent waves, inter-wave knowledge injection improves results.
# List all agents from registry
grep ' - id: ' agents/_registry.yml | sed 's/.*- id: //' | shuf
Assign agents to waves. Plan for 4 waves initially — you may not need all of them (see early stopping in Step 4).
| Wave | Agents | Brief variant |
|---|---|---|
| 1-2 | 20 agents | Standard brief |
| 3 | 10 agents + advocatus-diaboli | Brief + emerging consensus + adversarial challenge |
| 4+ | 10 agents each | Brief + "X is confirmed. Focus on edge cases and failures." |
Expected: A wave assignment table with all agents allocated. Include advocatus-diaboli in Wave 3 (not later) so the adversarial pass informs subsequent waves.
On failure: If fewer than 20 agents are available, reduce to 2-3 waves. The pattern still works with as few as 10 agents, though convergence signals are weaker.
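A minimal Python sketch of the wave plan above, assuming agent ids piped from the registry listing; plan_waves and its defaults are illustrative names, not part of the skill. It pins advocatus-diaboli to Wave 3, per the table.

```python
import random

def plan_waves(agent_ids: list[str], wave_size: int = 10) -> list[list[str]]:
    """Shuffle agents and chunk them into waves of roughly wave_size,
    reserving advocatus-diaboli for the Wave 3 adversarial pass."""
    ids = [a for a in agent_ids if a != "advocatus-diaboli"]
    random.shuffle(ids)  # ordering is irrelevant for the first two waves
    waves = [ids[i:i + wave_size] for i in range(0, len(ids), wave_size)]
    if len(waves) >= 3:
        waves[2].append("advocatus-diaboli")
    return waves
```

With fewer than 20 agents this naturally yields the reduced 2-3 wave plan described in the failure note.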
Launch each wave's agents in parallel. Use the sonnet model for cost efficiency (the value comes from perspective diversity, not individual depth).
Use Claude Code's TeamCreate tool to set up a coordinated team with task tracking. TeamCreate is a deferred tool — fetch it first via ToolSearch("select:TeamCreate").
TeamCreate({ team_name: "unleash-wave-1", description: "Wave 1: open-ended hypothesis generation" })
Then, for each wave:
1. TaskCreate with the brief and domain-specific framing
2. Agent tool with team_name: "unleash-wave-1" and subagent_type set to the agent's type (e.g., kabalist, geometrist)
3. TaskUpdate with owner; TaskList: teammates mark tasks completed as they finish
4. SendMessage({ type: "shutdown_request" }) and create the next team with the updated brief (Step 4)

This gives you built-in coordination: a shared task list tracks which agents have responded, teammates can be messaged for follow-up, and the lead manages wave transitions through task assignment.
For each agent in the wave, spawn it with the brief and a domain-specific framing:
Use the [agent-name] agent to analyze this problem through your domain expertise.
[Paste the brief]
Think about this from your specific perspective as a [agent-description].
[For non-technical agents: add a domain-specific framing, e.g., "What patterns
does your tradition recognize in systems that exhibit this kind of threshold behavior?"]
Respond exactly in the requested format.
Launch all agents in a wave simultaneously using the Agent tool with run_in_background: true. Wait for the wave to complete before launching the next wave (to enable inter-wave knowledge injection in Step 4).
| | TeamCreate | Raw Agent |
|---|---|---|
| Best for | Tier 3 full unleash (40+ agents) | Tier 2 panel (5-10 agents) |
| Coordination | Task list, messaging, ownership | Fire-and-forget, manual collection |
| Inter-wave handoff | Task status carries over | Must track manually |
| Overhead | Higher (team setup per wave) | Lower (single tool call per agent) |
Expected: Each wave returns ~10 structured responses within 2-5 minutes. Agents that fail to respond or return off-format output are noted but do not block the pipeline.
On failure: If more than 50% of a wave fails, check the brief clarity. Common cause: the output template is ambiguous, or the examples are insufficient for non-domain agents to reason about.
After waves 1-2, extract the emerging signal before launching the next wave.
**Update from prior waves**: [N] agents independently proposed [hypothesis family].
Build on this — what explains the remaining cases where this hypothesis fails?
Do NOT simply restate this finding. Extend, challenge, or refine it.
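If you assemble this update string automatically, a small Python sketch; the function name and arguments are illustrative, and the count comes from whatever per-family tally you keep.

```python
def injection_update(n_agents: int, family_label: str) -> str:
    """Render the inter-wave update prepended to the next wave's brief."""
    return (
        f"**Update from prior waves**: {n_agents} agents independently proposed "
        f"{family_label}.\n"
        "Build on this: what explains the remaining cases where this hypothesis fails?\n"
        "Do NOT simply restate this finding. Extend, challenge, or refine it."
    )

print(injection_update(7, "a modular-arithmetic mechanism"))  # hypothetical label
```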
Early stopping guidance: Not every unleash needs all agents. For well-defined problem domains (e.g., codebase analysis), convergence often stabilizes at 30-40 agents. For abstract or open-ended problems (e.g., unknown mathematical transformations), the full roster adds value because the correct domain is genuinely unpredictable. Check convergence after each wave — if the top family's count and null-model ratio have plateaued, additional waves yield diminishing returns.
This prevents rediscovery (where later waves independently re-derive what earlier waves already found) and directs later agents toward the edges of the problem.
Expected: Later waves produce more nuanced, targeted hypotheses that address gaps in the emerging consensus.
On failure: If no convergence appears after 2 waves, the problem may be too unconstrained. Consider narrowing the scope or providing more examples.
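To make the plateau check concrete, a minimal sketch assuming you record the top family's cumulative convergence count after each wave; min_gain is an illustrative threshold, not a value the skill prescribes.

```python
def has_plateaued(cumulative_counts: list[int], min_gain: int = 2) -> bool:
    """True when the top family's cumulative count barely grew over the
    most recent wave, signaling diminishing returns from more waves."""
    if len(cumulative_counts) < 2:
        return False  # need at least two waves to compare
    return cumulative_counts[-1] - cumulative_counts[-2] < min_gain

# Hypothetical cumulative counts after waves 1-4: stop after wave 4
print(has_plateaued([4, 9, 12, 13]))  # True: wave 4 added only one supporter
```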
After all waves complete, gather all responses into a single document. Deduplicate by grouping hypotheses into families: responses that propose the same underlying mechanism in different domain vocabularies count as one family.
Expected: A ranked list of hypothesis families with convergence counts, contributing agents, and representative testable predictions.
On failure: If every hypothesis is unique (no convergence), the signal-to-noise ratio is too low. Either the problem needs more examples, or the agents need a tighter output format.
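One deliberately simple way to do the family grouping, sketched in Python with keyword overlap; a real pass might cluster on embeddings instead, and the threshold here is an assumption.

```python
STOPWORDS = {"the", "a", "an", "of", "in", "is", "to", "and", "that", "by", "when"}

def keywords(hypothesis: str) -> frozenset[str]:
    return frozenset(w.strip(".,") for w in hypothesis.lower().split()) - STOPWORDS

def group_families(responses: dict[str, str], threshold: float = 0.4) -> list[dict]:
    """Greedy grouping: each hypothesis joins the first family whose seed
    shares enough keywords (Jaccard overlap), else it seeds a new family."""
    families: list[dict] = []
    for agent, hyp in responses.items():
        kw = keywords(hyp)
        for fam in families:
            union = kw | fam["keywords"]
            if union and len(kw & fam["keywords"]) / len(union) >= threshold:
                fam["agents"].append(agent)
                break
        else:
            families.append({"keywords": kw, "agents": [agent], "seed": hyp})
    # Rank by convergence count, the primary signal
    return sorted(families, key=lambda f: len(f["agents"]), reverse=True)
```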
Test the top hypothesis against a null model to ensure the convergence is meaningful, not an artifact of shared training data.
Expected: The top hypothesis family significantly exceeds chance-level convergence and/or passes programmatic verification.
On failure: If the top hypothesis fails verification, check the second-ranked family. If no family passes, the problem may require a different approach (deeper single-expert analysis, more data, or reformulated examples).
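A sketch of one possible null model in Python: under the null, each of n agents picks any of the observed families uniformly at random, and we ask how often k or more would land on the same one. Shared training data likely makes the true null more concentrated than uniform, so treat a small tail probability as necessary, not sufficient; the numbers below are hypothetical.

```python
from math import comb

def convergence_tail(n: int, k: int, p: float) -> float:
    """P(X >= k) for X ~ Binomial(n, p): the chance that k or more of n
    agents converge on one family if each picks it with probability p."""
    return sum(comb(n, i) * p**i * (1 - p) ** (n - i) for i in range(k, n + 1))

# 40 agents, 14 in the top family, ~8 observed families -> null p = 1/8
p_value = convergence_tail(40, 14, 1 / 8)
print(f"P(>=14 of 40 by chance) = {p_value:.5f}")
```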
Preferred timing: Wave 3, not post-synthesis. Including advocatus-diaboli in Wave 3 (alongside the inter-wave knowledge injection) is more effective than a standalone adversarial pass after all waves complete. Early challenge lets Waves 4+ refine against the critique rather than piling onto an unchallenged consensus.
If the adversarial pass was already part of Wave 3, this step becomes a final check. If not (e.g., you ran all waves without it), spawn advocatus-diaboli (or senior-researcher) now. For a structured pass, use TeamCreate to stand up a review team with both agents working in parallel against the consensus:
Here is the consensus hypothesis from [N] independent agents:
[Hypothesis]
[Supporting evidence and convergence stats]
Your job: find the strongest counterarguments. Where does this fail?
What alternative explanations are equally consistent with the evidence?
What experiment would definitively falsify this hypothesis?
Expected: A set of counterarguments, edge cases, and a falsification experiment. If the hypothesis survives adversarial scrutiny, it is ready for integration. A good adversarial pass sometimes partially defends the consensus — finding that the design is better than alternatives even if imperfect.
On failure: If the adversarial agent finds a fatal flaw, feed the critique back into a targeted follow-up wave (Tier 3+ iterative mode — select 5-10 agents best positioned to address the specific critique).
Unleash finds problems; teams solve them. Convert verified hypothesis families into actionable issues, then assemble focused teams to resolve each.
For each verified family:
- File an issue that captures the hypothesis, evidence, and falsification test (create-github-issues skill)
- Assemble a team with TeamCreate:
  - If a predefined team in teams/ matches the problem domain, use it
  - Otherwise, default to opaque-team (N shapeshifters with adaptive role assignment): it handles unknown problem shapes without requiring a custom composition
  - Include non-technical agents (advocatus-diaboli, contemplative): they catch implementation risks that technical agents miss

Expected: Each hypothesis family maps to a tracked issue with a team assigned. The unleash produced the diagnosis; the teams produce the fix.
On failure: If the team composition doesn't match the problem, reassign. Shapeshifter agents can research and design but lack write tools — the team lead must apply their code suggestions.
- forage-solutions: ant colony optimization for exploring solution spaces (complementary: narrower scope, deeper exploration)
- build-coherence: bee democracy for selecting among competing approaches (use after this skill to choose between top hypotheses)
- coordinate-reasoning: stigmergic coordination for managing information flow between agents
- coordinate-swarm: broader swarm coordination patterns for distributed systems
- expand-awareness: open perception before narrowing (complementary: use as individual agent preparation)
- meditate: clear context noise before launching (recommended before Step 1)