Orchestrates formal debates with proposition and opposition sides, coordinating debaters and judges through structured exchanges. Use when running debate exchanges, managing debate rounds, or continuing interrupted debates.
Manages formal debate execution through deterministic state tracking and resumability.
```
/plugin marketplace add urav06/dialectic
/plugin install dialectic@dialectic-marketplace
```

This skill inherits all available tools. When active, it can use any tool Claude has access to.
Debates cycle through 2 phases per exchange:
| current_phase | Action Required |
|---|---|
| awaiting_arguments | Spawn both debaters in parallel |
| awaiting_judgment | Spawn judge to evaluate all new arguments |
After judgment: cycle repeats with current_exchange incremented.
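The cycle can be sketched as a tiny state machine. This is a sketch only: `advance` is a hypothetical helper for illustration, since the real work happens through Task tool spawns and `debate_ops`.

```python
def advance(state):
    """Return the next state after the current phase's action completes."""
    if state["current_phase"] == "awaiting_arguments":
        # both debaters spawned in parallel, outputs processed by debate_ops
        return {**state, "current_phase": "awaiting_judgment"}
    # awaiting_judgment: judge spawned, verdict processed by debate_ops
    return {"current_exchange": state["current_exchange"] + 1,
            "current_phase": "awaiting_arguments"}

state = {"current_exchange": 0, "current_phase": "awaiting_arguments"}
state = advance(state)  # exchange 0 now awaits judgment
state = advance(state)  # judgment processed: exchange 1 awaits arguments
```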
Check {debate}/debate.md frontmatter (JSON format):

```json
{
  "current_exchange": 0,
  "current_phase": "awaiting_arguments"
}
```
Extract motion from the # Motion section (first markdown heading after frontmatter).
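Reading the state and motion can be sketched as follows. The `---`-delimited frontmatter layout is an assumption here; adjust the parsing if the actual file differs.

```python
import json
import re

def read_state(text):
    """Parse JSON frontmatter and the # Motion heading from debate.md.
    Assumes ----delimited frontmatter (an assumption, not confirmed)."""
    m = re.match(r"---\n(.*?)\n---\n(.*)", text, re.S)
    state = json.loads(m.group(1))
    # motion = first line under the "# Motion" heading
    motion = re.search(r"^# Motion\n+(.+)", m.group(2), re.M).group(1).strip()
    return state, motion

sample = (
    '---\n'
    '{"current_exchange": 0, "current_phase": "awaiting_arguments"}\n'
    '---\n\n'
    '# Motion\nThis house believes the motion text goes here.\n'
)
state, motion = read_state(sample)
```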
- Opening Exchange: current_exchange == 0
- Rebuttal Exchange: current_exchange >= 1
When current_exchange == 0 and current_phase == awaiting_arguments:
Load template:
Read templates/debater-opening.md from this skill's directory.
Spawn both debaters in parallel:
Use a single message with two Task tool invocations to spawn both debaters simultaneously.
For each side (proposition and opposition):
Substitute placeholders in template:
- {motion}: Extracted motion text
- {side}: Side name (proposition or opposition)

Spawn debater:
Use Task tool with subagent_type: "debater"
Prompt: [substituted template content]
Process outputs:
After both debaters complete:
- Proposition output: /tmp/prop_arg.json
- Opposition output: /tmp/opp_arg.json
- Run debate_ops: `python3 {skill_base_dir}/debate_ops process-exchange {debate} 0 --prop-file /tmp/prop_arg.json --opp-file /tmp/opp_arg.json`

Check the result JSON for errors or warnings. On errors, state remains unchanged: report to the user and halt. On warnings, note them and continue.
The script creates 6 argument files: prop_000a.md, prop_000b.md, prop_000c.md, opp_000a.md, opp_000b.md, opp_000c.md
State automatically updates to current_phase: awaiting_judgment.
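The opening-file naming convention (three arguments per side, a/b/c suffixes, zero-padded exchange number) can be sketched as:

```python
def opening_filenames(exchange=0):
    """Expected argument files for the opening exchange."""
    return [f"{side}_{exchange:03d}{suffix}.md"
            for side in ("prop", "opp")
            for suffix in "abc"]
```

For exchange 0 this yields prop_000a.md through opp_000c.md, matching the six files the script creates.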
Judge opening arguments:
When current_exchange == 0 and current_phase == awaiting_judgment:
Load template:
Read templates/judge.md from this skill's directory.
Substitute placeholders:
{argument_files}: Space-separated list of all 6 opening arguments:
@{debate}/arguments/prop_000a.md @{debate}/arguments/prop_000b.md @{debate}/arguments/prop_000c.md @{debate}/arguments/opp_000a.md @{debate}/arguments/opp_000b.md @{debate}/arguments/opp_000c.md
- {motion}: Extracted motion text

Spawn judge:
Use Task tool with subagent_type: "judge"
Prompt: [substituted template content]
Process output:
- Judge output: /tmp/judge.json
- Run debate_ops: `python3 {skill_base_dir}/debate_ops process-judge {debate} --json-file /tmp/judge.json`

Check the result JSON for errors or warnings. On errors, state remains unchanged: report to the user and halt. On warnings, note them and continue.
State automatically updates to current_phase: awaiting_arguments, current_exchange: 1.
When current_exchange >= 1 and current_phase == awaiting_arguments:
Build argument context:
- List all prior argument files: {debate}/arguments/prop_*.md and opp_*.md (the filename encodes the exchange number, e.g. prop_003 → exchange 3)

Load template:
Read templates/debater-rebuttal.md from this skill's directory.
Spawn both debaters in parallel:
Use a single message with two Task tool invocations to spawn both debaters simultaneously.
For proposition debater:
- {motion}: Extracted motion text
- {side}: proposition
- {exchange}: Current exchange number
- {your_arguments}: Newline-separated list: @{debate}/arguments/prop_000a.md, @{debate}/arguments/prop_000b.md, etc.
- {opponent_arguments}: Newline-separated list: @{debate}/arguments/opp_000a.md, @{debate}/arguments/opp_000b.md, etc.

For opposition debater:
- {motion}: Extracted motion text
- {side}: opposition
- {exchange}: Current exchange number
- {your_arguments}: Newline-separated list of opposition arguments
- {opponent_arguments}: Newline-separated list of proposition arguments

Process outputs:
After both debaters complete:
- Proposition output: /tmp/prop_arg.json
- Opposition output: /tmp/opp_arg.json
- Run debate_ops: `python3 {skill_base_dir}/debate_ops process-exchange {debate} {current_exchange} --prop-file /tmp/prop_arg.json --opp-file /tmp/opp_arg.json`

Check the result JSON for errors or warnings. On errors, state remains unchanged: report to the user and halt. On warnings, note them and continue.
State automatically updates to current_phase: awaiting_judgment.
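Recovering the exchange number from an argument filename (e.g. prop_003 → exchange 3) can be sketched as:

```python
import re

def exchange_of(filename):
    """prop_003.md -> 3; opening files carry an a/b/c suffix (prop_000a.md -> 0)."""
    m = re.match(r"(?:prop|opp)_(\d{3})[a-c]?\.md$", filename)
    return int(m.group(1)) if m else None
```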
Judge rebuttal arguments:
When current_exchange >= 1 and current_phase == awaiting_judgment:
Load template:
Read templates/judge.md from this skill's directory.
Substitute placeholders:
{argument_files}: Space-separated list of both new arguments:
@{debate}/arguments/prop_{current_exchange:03d}.md @{debate}/arguments/opp_{current_exchange:03d}.md
- {motion}: Extracted motion text

Spawn judge:
Use Task tool with subagent_type: "judge"
Prompt: [substituted template content]
Process output:
- Judge output: /tmp/judge.json
- Run debate_ops: `python3 {skill_base_dir}/debate_ops process-judge {debate} --json-file /tmp/judge.json`

Check the result JSON for errors or warnings. On errors, state remains unchanged: report to the user and halt. On warnings, note them and continue.
State automatically updates to current_phase: awaiting_arguments, current_exchange incremented.
After each phase, check if you should continue:
Re-read {debate}/debate.md frontmatter. The state itself doesn't track "completion"; you decide when done based on the user's request.
Processing scripts return:
```json
{
  "success": true/false,
  "argument_id": "prop_001" | ["prop_000a", "prop_000b", "prop_000c"],
  "errors": ["fatal errors"],
  "warnings": ["non-fatal warnings"]
}
```
On errors: state remains unchanged; report to the user and halt.
On warnings: note them and continue.
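The error/warning contract above can be sketched as follows; `check_result` is a hypothetical helper, not part of debate_ops.

```python
import json

def check_result(result_text):
    """Interpret a debate_ops result. Errors mean state is unchanged:
    report and halt. Warnings are noted and execution continues."""
    result = json.loads(result_text)
    if result.get("errors"):
        return False, result["errors"]       # report and halt
    return True, result.get("warnings", [])  # note and continue

ok, notes = check_result('{"success": true, "errors": [], "warnings": []}')
```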
Note: The script deletes the tmp files by default. If writing a tmp file fails because it already exists, Read it and try again.
Execution can be interrupted at any point and resumed by reading state:
When requested exchanges complete, report current state:
✓ Completed {N} exchanges for '{debate_slug}'
**Current Scores** (zero-sum tug-of-war):
- Proposition: {total} ({count} arguments)
- Opposition: {total} ({count} arguments)
**Next steps**:
- Continue debating: `/debate-run {debate_slug} X` to run X more exchanges
- Generate report: `/debate-report {debate_slug}` to create comprehensive analysis with visualizations
Extract totals and counts from cumulative_scores in {debate}/debate.md frontmatter.
Total exchanges = current_exchange from debate.md.
Note on zero-sum scoring: Positive total = winning, negative total = losing, zero = even. One side typically has positive total, the other negative (tug-of-war).
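Because scoring is zero-sum, the sign of the proposition total alone determines the leader. A minimal sketch:

```python
def leader(prop_total):
    """Zero-sum: the opposition total is -prop_total, so the sign decides."""
    if prop_total > 0:
        return "proposition"
    if prop_total < 0:
        return "opposition"
    return "even"
```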