Implements debate protocols, cross-examination patterns, and synthesis techniques for multi-agent teams in idea validation, PRD reviews, and competitive analysis.
Install: `npx claudepluginhub slgoodrich/agents --plugin agent-teams`

This skill uses the workspace's default tool permissions.
Orchestrates multi-agent debates with 2-5 dynamic agents in Challenge (select best variant), Strategy (deep analysis with proposals), or Critic (find weaknesses) modes. Triggers on debate, challenge, compare, critique prompts.
Conducts multi-persona debates for founder decisions with 4 grounded personas (Operator, Buyer, Investor, Contrarian) across structured rounds. Outputs transcript, recommendation, and decision log.
Orchestrates dynamic agent teams for iterative peer-to-peer debates on decisions, producing tradeoff maps via step-back moderation and contention analysis.
Protocols for structuring productive debate, cross-examination, and synthesis across multi-agent teams.
Auto-loaded by all six agents:

- idea-researcher, market-researcher, idea-skeptic - for structured debate and cross-examination in validation sprints and war rooms
- market-fit-reviewer, feasibility-reviewer, scope-reviewer - for cross-referencing in PRD stress tests
Solo decision-making has blind spots. But unstructured debate is just noise. The sweet spot is structured disagreement: agents with different perspectives challenge each other using evidence, following clear protocols that force productive conflict.
Not this: Three agents agreeing with each other to be polite. This: Three agents stress-testing each other's conclusions with pointed questions and evidence.
The goal isn't agreement. It's a well-examined conclusion where every weakness has been probed.
Every team debate follows three phases:
1. **Independent Investigation.** Each agent works in parallel on their assigned dimension. No cross-talk. This prevents groupthink and ensures genuinely independent perspectives.
2. **Cross-Examination.** Agents receive each other's findings and challenge them in a single round of structured challenges. Each agent must steel-man what it attacks and back every challenge with evidence (see the ground rules below).
3. **Synthesis.** The lead agent compiles all perspectives into a final deliverable. It must document convergence points, resolve or record disagreements, note any dissent, and end with a clear verdict, as covered in the sections that follow.
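The skill itself drives this flow through prompts, but a minimal Python sketch makes the control flow concrete. Everything here (the `Agent` class, `run_debate`, the method names) is hypothetical, for illustration only:

```python
from dataclasses import dataclass

@dataclass
class Finding:
    agent: str
    claims: list[str]

class Agent:
    """Toy stand-in for a prompt-driven agent."""
    def __init__(self, name: str):
        self.name = name

    def investigate(self, task: str) -> Finding:
        return Finding(self.name, [f"{self.name}: finding about {task}"])

    def challenge(self, others: list[Finding]) -> list[str]:
        return [f"{self.name} challenges {f.agent}" for f in others]

    def synthesize(self, findings: list[Finding], challenges: list[str]) -> str:
        return f"Verdict built from {len(findings)} findings, {len(challenges)} challenges"

def run_debate(agents: list[Agent], task: str) -> str:
    # Phase 1: independent investigation -- parallel, no cross-talk.
    findings = [a.investigate(task) for a in agents]
    # Phase 2: exactly one round of cross-examination; each agent
    # challenges everyone's findings except its own.
    challenges = []
    for a in agents:
        others = [f for f in findings if f.agent != a.name]
        challenges.extend(a.challenge(others))
    # Phase 3: the lead agent compiles the final deliverable.
    return agents[0].synthesize(findings, challenges)

print(run_debate([Agent("researcher"), Agent("skeptic")], "idea validation"))
```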
Evidence over assertion. "I think this won't work" is weak. "Users in similar markets showed 3% conversion rates [source], suggesting this won't work" is strong.
Steel-man before attacking. Before challenging a claim, restate it in its strongest form. This proves you understood it.
One round, make it count. Cross-examination is a single round. Agents can't go back and forth endlessly. This forces precision.
Disagree and commit. After synthesis, the verdict stands even if an agent disagrees. But dissent gets documented, not silenced.
No false consensus. If agents genuinely disagree, the synthesis says so. "Researchers disagreed on X. Here's why." is better than pretending everyone agreed.
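The first two rules can be enforced structurally: a challenge that lacks evidence or a steel-man restatement is rejected before it enters the round. A sketch; the `Challenge` record and its fields are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Challenge:
    """One cross-examination move. Field names are illustrative."""
    target_claim: str
    steelman: str   # the claim restated in its strongest form
    evidence: str   # what the challenge rests on, beyond opinion
    question: str

    def __post_init__(self):
        # Evidence over assertion: no citation, no challenge.
        if not self.evidence.strip():
            raise ValueError("Back the challenge with evidence, not opinion.")
        # Steel-man before attacking: prove you understood the claim.
        if not self.steelman.strip():
            raise ValueError("Restate the claim in its strongest form first.")
```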
When examining another agent's findings, use these question structures:
Evidence Challenge
"You claim [X]. What evidence supports this beyond [the single source you cited]? If that source is wrong, does your conclusion still hold?"
Assumption Challenge
"Your analysis assumes [Y]. What happens to your conclusion if [Y] is false? Is there evidence that [Y] might not hold?"
Magnitude Challenge
"You identified [risk/opportunity]. How significant is this really? Is this a deal-breaker or a minor concern? What's the actual impact?"
Alternative Explanation
"You attribute [outcome] to [cause]. Could [alternative cause] explain this equally well? Have you considered [different framing]?"
So-What Challenge
"You found [data point]. What does this actually mean for the decision we're making? How should this change our recommendation?"
A strong challenge changes the conversation: it targets a load-bearing claim and brings evidence of its own. A weak challenge wastes the round: a vague objection, a restated opinion, or a nitpick that leaves the verdict untouched.
Document the convergence explicitly:
## Convergence Points
All three perspectives agree on:
1. [Finding] - Supported by [evidence from each agent]
2. [Finding] - Supported by [evidence from each agent]
Confidence: HIGH (independent analyses converged)
Agreement from independent investigation is a strong signal. It means the conclusion is robust across different analytical lenses.
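One way to surface convergence mechanically is to intersect each agent's claims. A sketch, assuming claims have already been normalized to comparable strings; real findings would need fuzzy matching rather than set intersection:

```python
def convergence_points(findings: dict[str, set[str]]) -> set[str]:
    """Return the claims every agent reached independently.

    `findings` maps agent name -> set of normalized claim strings.
    """
    claim_sets = list(findings.values())
    if not claim_sets:
        return set()
    shared = set(claim_sets[0])
    for claims in claim_sets[1:]:
        shared &= claims
    return shared

shared = convergence_points({
    "idea-researcher":   {"demand exists", "pricing unclear"},
    "market-researcher": {"demand exists", "crowded market"},
    "idea-skeptic":      {"demand exists"},
})
# {'demand exists'} -- confidence HIGH: independent analyses converged
```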
Not all disagreements need resolution. Some reflect genuine uncertainty.
Resolvable disagreements (one agent has stronger evidence):
## Resolved Disagreement: [Topic]
- Agent A claimed [X] based on [evidence]
- Agent B claimed [Y] based on [evidence]
- Resolution: Agent A's position is stronger because [reasoning]
- Agent B's concern remains valid as a risk to monitor
Unresolvable disagreements (genuine uncertainty):
## Unresolved: [Topic]
- Agent A: [Position] based on [evidence]
- Agent B: [Opposing position] based on [evidence]
- Why it's unresolved: [Explanation - insufficient data, different frameworks, etc.]
- Implication: This uncertainty should factor into risk assessment
- Recommended action: [How to resolve - more research, user testing, etc.]
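A sketch of triaging disagreements into these two buckets, assuming each position carries a rough evidence-strength score; the `Disagreement` record and its scoring are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Disagreement:
    topic: str
    positions: dict[str, str]          # agent name -> position
    evidence_strength: dict[str, int]  # agent name -> rough 1-3 score

    def classify(self) -> str:
        # Resolvable only if one side's evidence clearly dominates;
        # a tie is genuine uncertainty and stays on the record.
        ranked = sorted(self.evidence_strength.items(),
                        key=lambda kv: kv[1], reverse=True)
        (winner, top), (_, runner_up) = ranked[0], ranked[1]
        if top > runner_up:
            return f"Resolved: {winner}'s position on {self.topic} is stronger"
        return f"Unresolved: {self.topic} should factor into risk assessment"

d = Disagreement(
    topic="pricing model",
    positions={"A": "usage-based", "B": "flat seat price"},
    evidence_strength={"A": 3, "B": 1},
)
print(d.classify())  # Resolved: A's position on pricing model is stronger
```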
When the synthesis verdict goes against one agent's recommendation:
## Dissenting View: [Agent Name]
[Agent] recommends [alternative] because [reasoning].
This dissent is noted because [why it matters].
The majority verdict differs because [reasoning].
If [condition], revisit this dissent.
When combining perspectives of different relevance, weight each finding by how much it bears on the decision rather than treating all inputs equally. Don't just list findings; tell the story of how the evidence fits together. Every synthesis must end with a clear verdict. Never end with "it depends" without specifying what it depends on and how to resolve it.
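That last rule can also be enforced structurally. A sketch; the `Verdict` record and field names are hypothetical:

```python
from dataclasses import dataclass, field

VAGUE = {"it depends", "unclear", "mixed", "maybe"}

@dataclass
class Verdict:
    recommendation: str                # e.g. "GO", "NO-GO", "PIVOT"
    reasoning: str
    revisit_if: list[str] = field(default_factory=list)

    def __post_init__(self):
        # "It depends" is only acceptable with named conditions
        # and a way to resolve them.
        if self.recommendation.strip().lower() in VAGUE and not self.revisit_if:
            raise ValueError(
                "Specify what the verdict depends on and how to resolve it."
            )
```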
When agents send findings to teammates:
## [Agent Name] Findings: [Topic]
### Key Claims
1. [Claim] - Confidence: [HIGH/MEDIUM/LOW] - Evidence: [summary]
2. [Claim] - Confidence: [HIGH/MEDIUM/LOW] - Evidence: [summary]
### Evidence Base
- [Source 1]: [What it shows]
- [Source 2]: [What it shows]
### Gaps and Limitations
- [What I couldn't find or verify]
- [Assumptions I'm making]
### Preliminary Recommendation
[What I think this means for the decision]
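If findings travel between agents as structured data rather than free text, this format maps naturally onto a record type. A sketch with hypothetical names that renders back into the markdown template above:

```python
from dataclasses import dataclass, field

@dataclass
class Claim:
    text: str
    confidence: str  # "HIGH" | "MEDIUM" | "LOW" -- buckets, not decimals
    evidence: str

@dataclass
class Findings:
    agent: str
    topic: str
    claims: list[Claim]
    sources: dict[str, str]            # source -> what it shows
    gaps: list[str] = field(default_factory=list)
    recommendation: str = ""

    def render(self) -> str:
        lines = [f"## {self.agent} Findings: {self.topic}", "### Key Claims"]
        for i, c in enumerate(self.claims, 1):
            lines.append(
                f"{i}. {c.text} - Confidence: {c.confidence} "
                f"- Evidence: {c.evidence}"
            )
        lines.append("### Evidence Base")
        lines += [f"- {src}: {shows}" for src, shows in self.sources.items()]
        lines.append("### Gaps and Limitations")
        lines += [f"- {g}" for g in self.gaps]
        lines += ["### Preliminary Recommendation", self.recommendation]
        return "\n".join(lines)
```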
Watch for these anti-patterns:

- Agents agreeing too quickly to avoid conflict. Every team debate should have at least one genuine challenge. If nobody disagrees, someone isn't doing their job.
- Deferring to the "lead" agent's view without genuine examination. The lead synthesizes; they don't dictate.
- Agents competing to find the most evidence rather than the most relevant evidence. Five weak sources don't beat one strong one.
- Cross-examination expanding into entirely new investigations. Stick to challenging existing findings, not opening new lines of inquiry.
- Scoring with decimal precision (7.3/10) when the underlying evidence only supports rough buckets (high/medium/low). Don't manufacture certainty.
Remember: the point of multi-agent debate isn't to be adversarial. It's to be thorough. Every blind spot found in debate is a blind spot that won't surprise you in the market.