Use when the user asks to "debate", "argue about", "compare perspectives", "stress test idea", "devil's advocate", or "tool vs tool". Structured debate between two AI tools with proposer/challenger roles and a verdict.
Facilitates structured debates between AI tools to thoroughly analyze topics and deliver verdicts.
npx claudepluginhub agent-sh/debate [topic] [--tools=tool1,tool2] [--rounds=N] [--effort=low|medium|high|max]

You are executing the /debate command. Your job is to parse the user's request, resolve missing parameters interactively, and execute the debate directly.
Parse $ARGUMENTS using both explicit flags and natural language. Flags take priority.
- --tools=TOOL1,TOOL2 (comma-separated pair; first is proposer, second is challenger)
- --rounds=N where N is 1-5
- --effort=VALUE where VALUE is one of: low, medium, high, max
- --model-proposer=VALUE (any string)
- --model-challenger=VALUE (any string)
- --context=VALUE where VALUE is: diff, file=PATH, or none (passed through to the consult skill for each tool invocation)

Remove matched flags from $ARGUMENTS.
Tool pair extraction (case-insensitive):
Rounds extraction:
Effort extraction (same as consult):
Topic extraction:
Validation: rounds must be 1-5. Proposer and challenger must differ. If same tool specified for both, show: [ERROR] Proposer and challenger must be different tools.
If no topic found: [ERROR] Usage: /debate "your topic" or /debate codex vs gemini about your topic
MUST resolve ALL missing parameters. Do NOT silently default.
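The flag-and-strip pass above can be sketched in portable shell. This is an illustrative helper only (the ARGS sample and extract_flag are hypothetical, not the actual implementation):

```shell
# Illustrative sketch of flag extraction; the real command parses $ARGUMENTS itself.
ARGS='codex vs gemini about event sourcing --rounds=3 --effort=high'

# extract_flag NAME: sets $value to the flag's value (empty if absent)
# and removes the matched flag from $ARGS.
extract_flag() {
  case " $ARGS " in
    *" --$1="*)
      value=${ARGS#*--$1=}     # drop everything through "--NAME="
      value=${value%% *}       # keep up to the next space
      ARGS=$(printf '%s' "$ARGS" | sed "s/--$1=$value *//")
      ;;
    *) value="" ;;
  esac
}

extract_flag rounds; rounds=$value
extract_flag effort; effort=$value

# Validation mirrors the spec: rounds must be 1-5.
case "$rounds" in
  ""|[1-5]) : ;;
  *) echo "[ERROR] Rounds must be 1-5. Got: $rounds" >&2 ;;
esac
```

Whatever remains in ARGS after stripping matched flags is the natural-language remainder used for topic extraction.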
Run all 5 checks in parallel via Bash:
- which <tool> 2>/dev/null && echo FOUND || echo NOTFOUND (Unix)
- where.exe <tool> 2>nul && echo FOUND || echo NOTFOUND (Windows)

Check for: claude, gemini, codex, opencode, copilot.
If fewer than 2 tools installed: [ERROR] Debate requires at least 2 AI CLI tools. Install more: npm i -g @anthropic-ai/claude-code, npm i -g @openai/codex
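On Unix, the five checks can be issued from a single Bash call by backgrounding each probe (a sketch; the tool list comes from this spec):

```shell
# Probe all five CLIs concurrently; each subshell reports FOUND or NOTFOUND.
check_tools() {
  for tool in claude gemini codex opencode copilot; do
    { command -v "$tool" >/dev/null 2>&1 \
        && echo "$tool FOUND" \
        || echo "$tool NOTFOUND"; } &
  done
  wait   # block until every background check has reported
}
check_tools
```

Output order may vary because the probes run in parallel; count FOUND lines to decide whether at least two tools are installed.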
Use a SINGLE AskUserQuestion call for all missing params:
AskUserQuestion:
questions:
- id: "debate-proposer"
header: "Proposer" # SKIP if proposer resolved
question: "Which tool should PROPOSE (argue for)?"
multiSelect: false
options (only installed tools):
- label: "Claude" description: "Deep code reasoning"
- label: "Gemini" description: "Fast multimodal analysis"
- label: "Codex" description: "Agentic coding"
- label: "OpenCode" description: "Flexible model choice"
- label: "Copilot" description: "GitHub-integrated AI"
- id: "debate-challenger"
header: "Challenger" # SKIP if challenger resolved
question: "Which tool should CHALLENGE (find flaws)?"
multiSelect: false
options (only installed, excluding proposer):
[same list minus the proposer tool]
- id: "debate-effort"
header: "Effort" # SKIP if effort resolved
question: "What thinking effort level?"
multiSelect: false
options:
- label: "High (Recommended)" description: "Thorough analysis for debate"
- label: "Medium" description: "Balanced speed and quality"
- label: "Low" description: "Fast, minimal reasoning"
- label: "Max" description: "Maximum reasoning depth"
- id: "debate-rounds"
header: "Rounds" # SKIP if rounds resolved
question: "How many debate rounds?"
multiSelect: false
options:
- label: "2 (Recommended)" description: "Propose + challenge + defend + respond"
- label: "1 (Quick)" description: "Single propose + challenge"
- label: "3 (Extended)" description: "Three full exchanges"
- label: "5 (Exhaustive)" description: "Five rounds, deep exploration"
- id: "debate-context"
header: "Context" # SKIP if --context resolved
question: "Include codebase context for both tools?"
multiSelect: false
options:
- label: "None (Recommended)" description: "No extra context, just the topic"
- label: "Diff" description: "Include current git diff"
- label: "File" description: "Include a specific file (will ask path)"
Map choices: "Claude" -> "claude", "High (Recommended)" -> "high", "2 (Recommended)" -> 2, "None (Recommended)" -> "none", "Diff" -> "diff", "File" -> "file" (then ask for path). Strip " (Recommended)" suffix.
If context resolved to "file": Use a follow-up AskUserQuestion to ask for the file path:
AskUserQuestion:
questions:
- id: "debate-file-path"
header: "File path"
question: "Which file should both tools see?"
multiSelect: false
options:
- label: "src/" description: "Source directory file"
- label: "README.md" description: "Project readme"
The user can type any path via "Other". After getting the path:
Validate the path: reject ".." segments that escape the project root and paths with a \\ or // prefix. If rejected: [ERROR] Context file must be within the project directory
If the file doesn't exist: [ERROR] Context file not found: {PATH}
If valid, set context to file={user_provided_path}.

If proposer and challenger resolve to the same tool after selection, show error and re-ask for challenger.
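The path checks can be sketched as follows. This is a hedged illustration: validate_context_path is a hypothetical helper, and it approximates "escapes the project root" by rejecting any ".." segment outright:

```shell
# Reject absolute paths, \\ and // prefixes, and any ".." traversal,
# then confirm the file exists before building the context value.
validate_context_path() {
  local path="$1"
  case "$path" in
    /*|\\\\*|//*|..|../*|*/..|*/../*)
      echo "[ERROR] Context file must be within the project directory" >&2
      return 1 ;;
  esac
  if [ ! -f "$path" ]; then
    echo "[ERROR] Context file not found: $path" >&2
    return 1
  fi
  echo "file=$path"
}
```

A stricter variant would canonicalize the path (e.g. with realpath) and compare it against the project root prefix.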
With all parameters resolved (topic, proposer, challenger, effort, rounds, optional model_proposer, model_challenger, context), execute the debate directly.
Invoke the debate skill to load prompt templates, context assembly rules, and synthesis format:
Skill: debate
Args: "[topic]" --proposer=[proposer] --challenger=[challenger] --rounds=[rounds] --effort=[effort]
The skill returns the prompt templates and rules. Use them for all subsequent steps.
For each round (1 through N):
Build Proposer Prompt:
Context assembly rules:
Invoke Proposer via Consult Skill:
Only include --model=[model_proposer] if the user provided a specific model. If model is "omit", empty, or "auto", do NOT pass --model to the consult skill.
Skill: consult
Args: "{proposer_prompt}" --tool=[proposer] --effort=[effort] [--model=[model_proposer]] [--context=[context]]
Set a 240-second timeout on this invocation. If it exceeds 240s, treat as a tool failure for this round.
Parse the JSON result. Extract the response text. Record: round, role="proposer", tool, response, duration_ms.
If the proposer call fails on round 1, abort: [ERROR] Debate aborted: proposer ({tool}) failed on opening round. {error}
If the proposer call fails on round 2+, skip remaining rounds and proceed to Phase 3c (synthesize from completed rounds, note the early stop).
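One way to enforce the 240-second budget is GNU coreutils timeout, which exits with status 124 on expiry. A sketch, assuming the wrapper runs the consult invocation rather than a raw command:

```shell
# Wrap a command in a wall-clock budget; status 124 signals a timeout,
# which the debate flow treats as a tool failure for that round.
run_with_budget() {
  local seconds="$1"; shift
  timeout "$seconds" "$@"
  local status=$?
  if [ "$status" -eq 124 ]; then
    echo "[WARN] invocation exceeded ${seconds}s" >&2
  fi
  return "$status"
}
# run_with_budget 240 <consult invocation>   # illustrative
```

The caller inspects the return status: 124 maps to the round-failure branches above, any other nonzero status to an ordinary tool error.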
Display to user immediately:
--- Round {round}: {proposer_tool} (Proposer) ---
{proposer_response}
Build Challenger Prompt:
Invoke Challenger via Consult Skill:
Only include --model=[model_challenger] if the user provided a specific model. If model is "omit", empty, or "auto", do NOT pass --model to the consult skill.
Skill: consult
Args: "{challenger_prompt}" --tool=[challenger] --effort=[effort] [--model=[model_challenger]] [--context=[context]]
Set a 240-second timeout on this invocation. If it exceeds 240s, treat as a tool failure for this round.
Parse the JSON result. Record: round, role="challenger", tool, response, duration_ms.
If the challenger call fails on round 1, emit [WARN] Challenger ({tool}) failed on round 1. Proceeding with uncontested proposer position. then proceed to Phase 3c.
If the challenger call fails on round 2+, skip remaining rounds and proceed to Phase 3c.
Display to user immediately:
--- Round {round}: {challenger_tool} (Challenger) ---
{challenger_response}
Assemble context for the next round using the context assembly rules above.
After all rounds complete (or after a partial failure), YOU are the JUDGE. Read all exchanges carefully. Use the synthesis format from the debate skill:
Verdict rules (from the debate skill):
Display the full synthesis using the format from the debate skill's Synthesis Format section.
Write the debate state to {AI_STATE_DIR}/debate/last-debate.json using the schema from the debate skill.
Platform state directory: use the AI_STATE_DIR environment variable if set. Otherwise:
- .claude/
- .opencode/
- .codex/

Create the debate/ subdirectory if it doesn't exist.
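Resolving the state directory might look like this sketch. The fallback order among the dot-directories is an assumption read off the list above, with .claude/ as the final default:

```shell
# Prefer AI_STATE_DIR; otherwise fall back to the first platform
# directory that already exists, defaulting to .claude/.
resolve_state_dir() {
  if [ -n "${AI_STATE_DIR:-}" ]; then
    echo "$AI_STATE_DIR"
  elif [ -d .claude ]; then
    echo .claude
  elif [ -d .opencode ]; then
    echo .codex && return   # placeholder removed below
  else
    echo .claude
  fi
}

STATE_DIR="$(resolve_state_dir)"
mkdir -p "$STATE_DIR/debate"   # last-debate.json is written here
```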
Apply the FULL redaction pattern table from the consult skill (plugins/consult/skills/consult/SKILL.md, Output Sanitization section). The skill is the canonical source with all 14 patterns. Do NOT maintain a separate subset here.
The consult skill's table covers: Anthropic keys (sk-*, sk-ant-*), OpenAI project keys (sk-proj-*), Google keys (AIza*), GitHub tokens (ghp_*, gho_*, github_pat_*), AWS keys (AKIA*, ASIA*), env assignments (ANTHROPIC_API_KEY=*, OPENAI_API_KEY=*, GOOGLE_API_KEY=*, GEMINI_API_KEY=*), and auth headers (Bearer *).
Read the consult skill file to get the exact patterns and replacements.
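As an illustration only (the consult skill's 14-pattern table remains canonical), a redaction pass over tool output could look like the sketch below. The regexes are approximations, not the authoritative patterns, and several pattern families (env assignments among them) are omitted:

```shell
# Partial sketch of the redaction pass -- a subset of the canonical
# patterns with illustrative regexes; see SKILL.md for the full table.
redact() {
  sed -E \
    -e 's/sk-[A-Za-z0-9_-]{16,}/[REDACTED]/g' \
    -e 's/AIza[A-Za-z0-9_-]{16,}/[REDACTED]/g' \
    -e 's/(ghp|gho)_[A-Za-z0-9]{16,}/[REDACTED]/g' \
    -e 's/github_pat_[A-Za-z0-9_]{16,}/[REDACTED]/g' \
    -e 's/(AKIA|ASIA)[A-Z0-9]{16}/[REDACTED]/g' \
    -e 's|Bearer [A-Za-z0-9._/+=-]+|Bearer [REDACTED]|g'
}
```

Pipe every tool response through the redaction pass before displaying it or writing it to last-debate.json.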
Canonical source:
plugins/consult/skills/consult/SKILL.md. This table is for planning reference only -- always invoke via Skill: consult, which handles safe question passing, temp file creation, and cleanup. Do NOT execute these commands directly.
| Provider | Safe Command Pattern |
|---|---|
| Claude | claude -p - --output-format json --model "MODEL" --max-turns TURNS --allowedTools "Read,Glob,Grep" < "{AI_STATE_DIR}/consult/question.tmp" |
| Gemini | gemini -p - --output-format json -m "MODEL" < "{AI_STATE_DIR}/consult/question.tmp" |
| Codex | codex exec "$(cat "{AI_STATE_DIR}/consult/question.tmp")" --json -m "MODEL" -c model_reasoning_effort="LEVEL" |
| OpenCode | opencode run - --format json --model "MODEL" --variant "VARIANT" < "{AI_STATE_DIR}/consult/question.tmp" |
| Copilot | copilot -p - < "{AI_STATE_DIR}/consult/question.tmp" |
| Effort | Claude | Gemini | Codex | OpenCode | Copilot |
|---|---|---|---|---|---|
| low | claude-haiku-4-5 (1 turn) | gemini-3-flash-preview | gpt-5.3-codex (low) | default (low) | no control |
| medium | claude-sonnet-4-6 (3 turns) | gemini-3-flash-preview | gpt-5.3-codex (medium) | default (medium) | no control |
| high | claude-opus-4-6 (5 turns) | gemini-3.1-pro-preview | gpt-5.3-codex (high) | default (high) | no control |
| max | claude-opus-4-6 (10 turns) | gemini-3.1-pro-preview | gpt-5.3-codex (high) | default + --thinking | no control |
| Provider | Parse Expression |
|---|---|
| Claude | JSON.parse(stdout).result |
| Gemini | JSON.parse(stdout).response |
| Codex | JSON.parse(stdout).message or raw text |
| OpenCode | Parse JSON events, extract final text block |
| Copilot | Raw stdout text |
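Where jq is available, extraction for the single-field JSON providers can be sketched as below. This is hedged: field names are taken from the table above, and codex, opencode, and copilot need the richer handling the table describes:

```shell
# Extract the response text for providers that return one JSON field;
# everything else falls through to raw stdout.
parse_response() {
  case "$1" in
    claude) jq -r '.result' ;;
    gemini) jq -r '.response' ;;
    *)      cat ;;   # codex/opencode/copilot: see the parse table
  esac
}
```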
| Error | Output |
|---|---|
| No topic provided | [ERROR] Usage: /debate "your topic" or /debate codex vs gemini about your topic |
| Tool not installed | [ERROR] {tool} is not installed. Install with: {install command} |
| Fewer than 2 tools | [ERROR] Debate requires at least 2 AI CLI tools installed. |
| Same tool for both | [ERROR] Proposer and challenger must be different tools. |
| Rounds out of range | [ERROR] Rounds must be 1-5. Got: {rounds} |
| Context file not found | [ERROR] Context file not found: {PATH} |
| Proposer fails round 1 | [ERROR] Debate aborted: proposer ({tool}) failed on opening round. {error} |
| Challenger fails round 1 | [WARN] Challenger ({tool}) failed on round 1. Proceeding with uncontested proposer position. Then synthesize from available exchanges. |
| Any tool fails mid-debate | Synthesize from completed rounds. Note the incomplete round in output. |
| Tool invocation timeout (>240s) | Round 1 proposer: abort with [ERROR] Debate aborted: proposer ({tool}) timed out after 240s. Round 1 challenger: proceed with uncontested position. Round 2+: synthesize from completed rounds, note [WARN] {role} ({tool}) timed out in round {N}. |
| All rounds timeout | [ERROR] Debate failed: all tool invocations timed out. |
# Natural language
/debate codex vs gemini about microservices vs monolith
/debate with claude and codex about our auth implementation
/debate thoroughly gemini vs codex about database schema design
/debate codex vs gemini 3 rounds about event sourcing
# Explicit flags
/debate "Should we use event sourcing?" --tools=claude,gemini --rounds=3 --effort=high
/debate "Redis vs PostgreSQL for caching" --tools=codex,opencode
# Mixed
/debate codex vs gemini --effort=max about performance optimization strategies