# agent-tower

Assembles an AI agent council with dynamic personas: the council generates parallel opinions on a query, anonymously ranks them, and synthesizes a weighted final response.

Install:

```
npx claudepluginhub bayramannakov/agent-tower-plugin
```

## Overview

The council mode runs multiple AI agents in parallel to gather diverse perspectives on a task, then synthesizes their opinions into a final answer.

**Stages:**

1. **Stage 1**: All agents provide independent opinions (parallel)
2. **Stage 2**: Each agent reviews and ranks the others' opinions (anonymized)
3. **Stage 3**: The chairman synthesizes all opinions, weighted by the rankings

**Features:**

- Dynamic persona suggestion based on question analysis
- Anonymized peer ranking to avoid bias
- Weighted synthesis based on peer rankings
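The three stages above can be sketched as follows. This is a hypothetical illustration, not the plugin's `run_council.py`: `ask` stands in for a call out to an agent, and the stand-in ranking rule is arbitrary (a real council would have each agent rank anonymized peer opinions).

```python
def run_council(task, agents, ask):
    """Hypothetical sketch of the three council stages (not run_council.py itself)."""
    # Stage 1: every agent gives an independent opinion (run in parallel in practice).
    opinions = {name: ask(name, f"Give your opinion on: {task}") for name in agents}

    # Stage 2: each agent ranks the other (anonymized) opinions; a Borda-style
    # count turns rank positions into scores (higher = ranked better by peers).
    scores = dict.fromkeys(agents, 0)
    for ranker in agents:
        others = [n for n in agents if n != ranker]
        ranked = sorted(others, key=lambda n: len(opinions[n]))  # stand-in ranking rule
        for position, name in enumerate(ranked):
            scores[name] += len(others) - position

    # Stage 3: the chairman weights each opinion by its peer score when synthesizing.
    total = sum(scores.values())
    weights = {name: scores[name] / total for name in agents}
    return opinions, weights

agents = ["claude", "codex", "gemini"]
opinions, weights = run_council(
    "best hiking in Seattle", agents,
    ask=lambda name, prompt: f"{name}'s take on {prompt!r}",
)
```

The weights sum to 1 and feed the final synthesis, so opinions that peers rank highly carry more influence on the answer.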
## Pre-Execution: Analyze Question & Gather Configuration

Before running the council, first analyze the user's question to determine the appropriate perspectives/personas. Consider the question's domain, whether it is factual or subjective, and which kinds of expertise would add the most value.
Then use AskUserQuestion to suggest relevant personas based on your analysis. Call it with:
- question: "I've analyzed your question. Which perspectives would be most valuable?"
- header: "Perspectives"
- multiSelect: true
- options: [Generate 3-4 relevant personas based on the question type]
Example for "best hiking in Seattle":
- label: "Local Expert", description: "Deep knowledge of Seattle area trails and conditions"
- label: "Outdoor Enthusiast", description: "Practical hiking experience and recommendations"
- label: "Research Analyst", description: "Comprehensive data on trail ratings and reviews"
- label: "Critical Thinker", description: "Questions assumptions about 'best' and considers trade-offs"
Example for "should we use microservices":
- label: "Systems Architect", description: "Scalability, infrastructure, distributed systems"
- label: "DevOps Engineer", description: "Deployment, monitoring, operational complexity"
- label: "Developer Experience", description: "Team productivity, learning curve, tooling"
- label: "Devil's Advocate", description: "Challenge assumptions, identify hidden risks"
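Combining the fields above, the payload for the microservices example might be shaped like this. The exact AskUserQuestion schema is assumed from the bullets in this document, so treat it as a sketch:

```python
# Assumed AskUserQuestion payload shape, built from the fields listed above.
persona_question = {
    "question": "I've analyzed your question. Which perspectives would be most valuable?",
    "header": "Perspectives",
    "multiSelect": True,  # the user may pick several personas
    "options": [
        {"label": "Systems Architect", "description": "Scalability, infrastructure, distributed systems"},
        {"label": "DevOps Engineer", "description": "Deployment, monitoring, operational complexity"},
        {"label": "Developer Experience", "description": "Team productivity, learning curve, tooling"},
        {"label": "Devil's Advocate", "description": "Challenge assumptions, identify hidden risks"},
    ],
}
```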
Also ask about number of agents if not specified:
Use AskUserQuestion with:
- question: "How many agents should participate?"
- header: "Agents"
- options:
  - label: "2 agents", description: "Quick analysis with two perspectives"
  - label: "3 agents (Recommended)", description: "Balanced coverage"
  - label: "All available", description: "Maximum perspectives"
Skip questions for options explicitly provided in $ARGUMENTS.
Parse these from $ARGUMENTS:
- `--agents N` - Number of agents to use (default: all available)
- `--personas JSON` - Custom personas as a JSON array (see below)
- `--no-personas` - Disable automatic persona assignment
- `--verbose` or `-v` - Show detailed progress

Check which agents are available:

```
python3 "${CLAUDE_PLUGIN_ROOT}/scripts/list_agents.py"
```
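The flag handling described above could be prototyped with `argparse`. This is a hypothetical sketch of how `$ARGUMENTS` might be split into flags and task text; the plugin's actual parsing may differ:

```python
import argparse
import json
import shlex

def parse_council_args(arg_string):
    """Hypothetical parser for the /council flags described above."""
    parser = argparse.ArgumentParser(add_help=False)
    parser.add_argument("--agents", type=int, default=None)        # default: all available
    parser.add_argument("--personas", type=json.loads, default=None)
    parser.add_argument("--no-personas", action="store_true")
    parser.add_argument("--verbose", "-v", action="store_true")
    args, rest = parser.parse_known_args(shlex.split(arg_string))
    args.task = " ".join(rest)  # everything that is not a flag is the task text
    return args

args = parse_council_args('"Review the security of this authentication flow" --agents 3 -v')
```

`parse_known_args` keeps unrecognized tokens (the quoted task) separate from the flags, which matches the skip-questions rule: any option present here never needs an AskUserQuestion prompt.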
Run the council with custom personas (based on user selections):
```
python3 "${CLAUDE_PLUGIN_ROOT}/scripts/run_council.py" --task "YOUR_TASK_HERE" --personas '[{"name":"Local Expert","focus":"Seattle area knowledge"},{"name":"Outdoor Enthusiast","focus":"hiking experience"},{"name":"Critical Thinker","focus":"trade-offs and nuance"}]' [--agents N] [-v]
```
Or run with automatic persona inference:
```
python3 "${CLAUDE_PLUGIN_ROOT}/scripts/run_council.py" --task "YOUR_TASK_HERE" [--agents N] [-v]
```
Parse the JSON result and present it as:
**Opinions:**

| Agent | Persona | Opinion Summary | Confidence |
|---|---|---|---|
| claude | Security Analyst | Key finding... | 85% |
| codex | Systems Architect | Key finding... | 90% |
| gemini | Devil's Advocate | Key finding... | 70% |

**Peer Rankings (1=best):**

**Chairman's Synthesis:**

[The synthesized final answer]

**Consensus Level:** X%

**Key Insights:**

**Dissenting Views:**
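As a sketch, the opinions table in this presentation could be rendered from the script's JSON result. The schema below is assumed from the template above, not taken from `run_council.py`, whose actual output fields may differ:

```python
import json

# Assumed result shape, inferred from the presentation template above.
raw = """{
  "opinions": [
    {"agent": "claude", "persona": "Security Analyst", "summary": "Key finding...", "confidence": 0.85},
    {"agent": "codex", "persona": "Systems Architect", "summary": "Key finding...", "confidence": 0.90}
  ],
  "synthesis": "The synthesized final answer",
  "consensus": 0.78
}"""

result = json.loads(raw)
rows = ["| Agent | Persona | Opinion Summary | Confidence |", "|---|---|---|---|"]
for op in result["opinions"]:
    rows.append(f"| {op['agent']} | {op['persona']} | {op['summary']} | {op['confidence']:.0%} |")
table = "\n".join(rows)
```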
**Examples:**

```
/tower:council "Should we use TypeScript or JavaScript for the frontend?"
/tower:council "Review the security of this authentication flow" --agents 3
/tower:council "Evaluate this startup idea: AI meal planning" --verbose
```