Orchestrates multi-agent debates with 2-5 dynamic agents in Challenge (select best variant), Strategy (deep analysis with proposals), or Critic (find weaknesses) modes. Triggers on debate, challenge, compare, critique prompts.
Install:

```
npx claudepluginhub kochetkov-ma/claude-brewcode --plugin brewcode
```
Orchestrates sequential multi-agent debates. Dynamic agents (2-5) with unique characters debate, main session acts as judge, secretary summarizes, judge writes final decisions.
EXECUTE using Bash tool:

```bash
bash "${CLAUDE_SKILL_DIR}/scripts/validate.sh" && echo "VALID" || echo "FAILED"
```

STOP if FAILED: fix missing files before continuing.
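The real validator ships with the plugin; as a rough sketch, a script like this typically just confirms the bundled files exist (the file names below are taken from the plugin layout and may not match the actual checks):

```bash
#!/usr/bin/env bash
# Hypothetical sketch of a validate.sh: fail fast if any bundled file is missing.
# The actual script's checks may differ.
set -euo pipefail
for f in agents/archetypes.md agents/debater-template.md agents/secretary.md \
         references/setup-flow.md references/discovery-flow.md references/summary-flow.md; do
  [ -f "${CLAUDE_SKILL_DIR}/${f}" ] || { echo "missing: ${f}" >&2; exit 1; }
done
echo "all skill files present"
```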
Read archetypes into context:
Read file: ${CLAUDE_SKILL_DIR}/agents/archetypes.md
Arguments: $ARGUMENTS
| Flag | Default | Description |
|---|---|---|
| `-m` | ask user | Mode: `challenge`, `strategy`, or `critic` |
| `-n` | 3 | Agent count: 2-5 |
| `-r` | 5 | Max debate rounds |
| `--review` | off | Run `/brewcode:review` on output |
| (positional) | — | Topic text or file path |
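For illustration, a full `$ARGUMENTS` string combining these flags might look like this (values and topic are hypothetical):

```
-m challenge -n 4 -r 6 --review "Choose a queueing backend: Kafka vs RabbitMQ vs SQS"
```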
Mode selection (`-m` omitted): if mode is NOT explicitly provided via the `-m` flag or clearly stated in the topic text, do NOT auto-detect. Ask the user using AskUserQuestion:
```
Which debate mode?
- Challenge — generate/receive variants, debate to select the best one
- Strategy — each agent proposes independently, then debate to converge
- Critic — all agents attack the given solution to find weaknesses/risks
Reply with mode name or number.
```

Only proceed after explicit user choice.
If the topic is a file path that exists on disk, read the file content as the topic.
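A minimal sketch of that check in bash (the `topic` variable name is illustrative):

```bash
# If the positional argument names an existing file, use its contents as the topic.
if [ -f "$topic" ]; then
  topic="$(cat "$topic")"
fi
```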
EXECUTE using Bash tool:

```bash
bash "${CLAUDE_SKILL_DIR}/scripts/init-log.sh" && echo "INIT_OK" || echo "INIT_FAILED"
```

STOP if INIT_FAILED: cannot create the report directory.
Capture the output; it prints:

```
REPORT_DIR=<path>
LOG_FILE=<path>
```

Store REPORT_DIR and LOG_FILE for all subsequent phases.
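One way to capture both values in a single shell step (a sketch; it assumes the two `KEY=value` lines above are the script's only stdout):

```bash
# Run init-log.sh once and extract each path from its output.
out="$(bash "${CLAUDE_SKILL_DIR}/scripts/init-log.sh")" || { echo "INIT_FAILED"; exit 1; }
REPORT_DIR="$(printf '%s\n' "$out" | sed -n 's/^REPORT_DIR=//p')"
LOG_FILE="$(printf '%s\n' "$out" | sed -n 's/^LOG_FILE=//p')"
echo "INIT_OK: reports in ${REPORT_DIR}"
```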
Display to user:

```
Debate Setup
Mode: {detected_mode}
Agents: {n}
Max rounds: {r}
Report: {REPORT_DIR}
Log: {LOG_FILE}
Topic: {topic_summary}
```
Ask user using AskUserQuestion tool:

```
Debate configuration:
Mode: {mode} | Agents: {n} | Max rounds: {r}
Topic: {topic_first_100_chars}

Options:
- Proceed with these settings
- Change mode (challenge/strategy/critic)
- Change agent count (2-5)
- Change max rounds
- Describe custom agent profiles (instead of auto-generated)
```

Apply any user changes. If the user provides custom profiles, skip auto-generation in Phase 4 and use their descriptions.
Read reference for agent generation:
Read file: ${CLAUDE_SKILL_DIR}/references/setup-flow.md
Follow setup-flow.md to generate agent profiles. Result: a table of agents with name, role, character archetype, perspective, and WHY chosen.
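For illustration only, a generated team might look like the table below; names, archetypes, and perspectives here are hypothetical, and real archetypes come from archetypes.md:

| # | Name | Role | Archetype | Perspective | Why chosen |
|---|---|---|---|---|---|
| 1 | Vera | Debater | Pragmatist | Operational cost | Grounds proposals in run-time reality |
| 2 | Theo | Debater | Visionary | Long-term architecture | Pushes past incremental fixes |
| 3 | Mara | Critic | Skeptic | Failure modes | Stress-tests every claim |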
Display the agent table to the user. Ask confirmation using AskUserQuestion:

```
Agent Team:

| # | Name | Role | Archetype | Perspective |
|---|---|---|---|---|
| ... | ... | ... | ... | ... |

Options:
- Proceed
- Swap an agent (specify which)
- Regenerate all
```
Research phase — gather current, verified information before debate begins.
Read file: ${CLAUDE_SKILL_DIR}/references/discovery-flow.md
Follow discovery-flow.md to spawn parallel research agents. All findings are saved to {REPORT_DIR}/discovery.md with sources.
Every debate argument in Phase 6 MUST reference findings from discovery.md. Unsourced claims are not valid arguments.
Display discovery summary to user before proceeding to debate.
Load mode-specific flow reference and execute debate.
| Mode | Reference |
|---|---|
| challenge | ${CLAUDE_SKILL_DIR}/references/challenge-flow.md |
| strategy | ${CLAUDE_SKILL_DIR}/references/strategy-flow.md |
| critic | ${CLAUDE_SKILL_DIR}/references/critic-flow.md |
Read the matching reference file and follow its instructions exactly.
Agent spawning: Use Task tool with subagent_type: "general-purpose". Build each agent's prompt dynamically by combining:
- `${CLAUDE_SKILL_DIR}/agents/debater-template.md`
- `${CLAUDE_SKILL_DIR}/agents/{role}-template.md`
- `{REPORT_DIR}/discovery.md` (injected as Evidence Base)
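As a rough sketch, assembling one agent's prompt can be plain concatenation of those three parts (the order and the `critic` role below are assumptions; the mode's flow reference is authoritative):

```bash
# Build one debater prompt: shared template + role template + evidence base.
role="critic"   # illustrative; actual roles come from the generated agent table
prompt="$(cat "${CLAUDE_SKILL_DIR}/agents/debater-template.md" \
              "${CLAUDE_SKILL_DIR}/agents/${role}-template.md" \
              "${REPORT_DIR}/discovery.md")"
```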
After each agent turn, append to the log. EXECUTE using Bash tool:

```bash
bash "${CLAUDE_SKILL_DIR}/scripts/append-log.sh" "LOG_FILE_PATH" \
  '{"ts":"...","from":"agent-name","to":["targets"],"what":"<20 words","why":"<40 words (include [Source: #N] refs)","type":"argument","mode":"MODE"}'
```
Judge interventions (main session): after each round, evaluate whether consensus is emerging, redirect if the debate is stuck, and end early on unanimous agreement.
Read file: ${CLAUDE_SKILL_DIR}/references/summary-flow.md
Follow summary-flow.md: the secretary draws on {REPORT_DIR}/discovery.md and writes summary.md in REPORT_DIR.

Judge (main session) writes decisions.md:

Write to: {REPORT_DIR}/decisions.md
Display final status:

```
Debate Complete
Mode: {mode}
Rounds: {actual_rounds}/{max_rounds}
Outcome: {consensus | partial | no-consensus}
Agents: {agent_table_brief}

Decisions (top 3-5):
- {bullet_1}
- {bullet_2}
- {bullet_3}

Artifacts:
- {REPORT_DIR}/discovery.md
- {REPORT_DIR}/decisions.md
- {REPORT_DIR}/summary.md
- {REPORT_DIR}/debate-log.jsonl
```
If --review flag was set:
Invoke: Skill(skill="brewcode:standards-review", args="{REPORT_DIR}")