# council-of-experts
Consults a council of AI experts using multiple models in parallel. Activate when user says "ask the experts", "consult the experts", "get expert opinions", or requests multi-model expert opinions on complex decisions, architecture, or strategic questions.
`npx claudepluginhub michabbb/claude-code --plugin council-of-experts`

This skill uses the workspace's default tool permissions.
Launches 5 subagents IN PARALLEL - each consults one expert and returns the response. Your main context stays clean.
Each Task is launched with `run_in_background: true`.

| Expert | CLI | Model |
|---|---|---|
| Grok | opencode | xai/grok-4.20-0309-reasoning |
| Kimi | opencode | openrouter/moonshotai/kimi-k2.5 |
| Gemini | opencode | google/gemini-3-pro-preview |
| MiniMax | opencode | openrouter/minimax/minimax-m2.7 |
| GPT | codex | gpt-5.4 (high reasoning) |
CRITICAL: Experts need sufficient context. Before launching, gather the full question, relevant background and constraints, and the absolute paths of any files to analyze.
Launch ALL 5 Task calls in a SINGLE message. Each subagent runs ONE Bash command.
Task 1 - Grok:
subagent_type: "expert-consultant"
run_in_background: true
prompt: |
Run this Bash command and return the output:
opencode run "@expert [QUESTION_WITH_CONTEXT]" -m xai/grok-4.20-0309-reasoning -f [FILES] --format json 2>&1 | jq -r 'select(.type == "text") | "response: \(.part.text)\nsessionid: \(.sessionID)"'
Task 2 - Kimi:
subagent_type: "expert-consultant"
run_in_background: true
prompt: |
Run this Bash command and return the output:
opencode run "@expert [QUESTION_WITH_CONTEXT]" -m openrouter/moonshotai/kimi-k2.5 -f [FILES] --format json 2>&1 | jq -r 'select(.type == "text") | "response: \(.part.text)\nsessionid: \(.sessionID)"'
Task 3 - Gemini:
subagent_type: "expert-consultant"
run_in_background: true
prompt: |
Run this Bash command and return the output:
opencode run "@expert [QUESTION_WITH_CONTEXT]" -m google/gemini-3-pro-preview -f [FILES] --format json 2>&1 | jq -r 'select(.type == "text") | "response: \(.part.text)\nsessionid: \(.sessionID)"'
Task 4 - MiniMax:
subagent_type: "expert-consultant"
run_in_background: true
prompt: |
Run this Bash command and return the output:
opencode run "@expert [QUESTION_WITH_CONTEXT]" -m openrouter/minimax/minimax-m2.7 -f [FILES] --format json 2>&1 | jq -r 'select(.type == "text") | "response: \(.part.text)\nsessionid: \(.sessionID)"'
Task 5 - GPT:
subagent_type: "expert-consultant"
run_in_background: true
prompt: |
Run this Bash command and return the response + session ID:
codex exec --profile expert --sandbox read-only "[QUESTION_WITH_CONTEXT]
Files to analyze:
[FILE_PATHS]" -m gpt-5.4 -c model_reasoning_effort=\"high\" 2>&1
IMPORTANT: codex MUST use --sandbox read-only to prevent any file modifications!
Use TaskOutput to collect all 5 responses. Each subagent returns the expert's response text and its session ID.
Present to user:
## Expert Council Results
### Individual Responses
**Grok:** [summary]
**Kimi:** [summary]
**Gemini:** [summary]
**MiniMax:** [summary]
**GPT:** [summary]
### Consensus
- [Points where experts agree]
### Divergent Views
| Expert | Position |
|--------|----------|
### Recommendation
[Best solution combining insights]
### Session IDs
- Grok: [id]
- Kimi: [id]
- Gemini: [id]
- MiniMax: [id]
- GPT: [id]
To ask a follow-up question, resume the expert's session:

opencode:
opencode run "Follow-up" -s SESSION_ID -m [MODEL] 2>&1
codex:
codex exec resume SESSION_ID "Follow-up" 2>&1
Passing files to each CLI:

| CLI | Method |
|---|---|
| opencode | -f /absolute/path/file1 -f /absolute/path/file2 |
| codex | List files in prompt text (codex reads them itself) |
When using --format json, opencode outputs one JSON object per line (NDJSON). Example:
{"type":"step_start","timestamp":1765763743412,"sessionID":"ses_xxx",...}
{"type":"text","timestamp":1765763744943,"sessionID":"ses_xxx","part":{"type":"text","text":"The actual response",...}}
{"type":"step_finish","timestamp":1765763744961,"sessionID":"ses_xxx",...}
Parsing with jq (recommended):
Extract both response and session ID in one clean command:
opencode run "@expert [QUESTION]" -m [MODEL] --format json 2>&1 | jq -r 'select(.type == "text") | "response: \(.part.text)\nsessionid: \(.sessionID)"'
Output format:
response: The expert's answer here
sessionid: ses_xxxxxxxxxxxxx
This filters for the type="text" line and extracts both the response text and session ID in a clean, parseable format.
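The filter can be tried locally against sample NDJSON before wiring it into a subagent prompt. The session ID and response text below are placeholders, not real opencode output:

```shell
# Simulate opencode's NDJSON stream and extract the response + session ID.
printf '%s\n' \
  '{"type":"step_start","timestamp":1,"sessionID":"ses_abc123"}' \
  '{"type":"text","timestamp":2,"sessionID":"ses_abc123","part":{"type":"text","text":"The actual response"}}' \
  '{"type":"step_finish","timestamp":3,"sessionID":"ses_abc123"}' |
jq -r 'select(.type == "text") | "response: \(.part.text)\nsessionid: \(.sessionID)"'
# Prints:
# response: The actual response
# sessionid: ses_abc123
```

Because `select()` drops the `step_start` and `step_finish` lines entirely, only the `text` event contributes to the output, so the result stays stable even if the stream gains extra event types.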
- `run_in_background: true` for each Task
- `subagent_type: "expert-consultant"` - custom agent with `Bash(opencode *)`, `Bash(codex *)` permissions
- `--format json` for opencode commands to get session IDs
- `--sandbox read-only` for codex - prevents any file modifications!

This skill requires the expert-consultant agent at `agents/expert-consultant.md` with these permissions:
tools: Bash(opencode *), Bash(codex *)
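As a sketch, `agents/expert-consultant.md` might look like the following. Only the `tools` line is specified by this skill; the other frontmatter fields and the instruction text are illustrative assumptions:

```markdown
---
name: expert-consultant
description: Runs a single expert CLI command and returns its output verbatim.
tools: Bash(opencode *), Bash(codex *)
---
Run the Bash command given in the prompt exactly once and return its full
output (response text and session ID) without summarizing, truncating, or
modifying it.
```

Keeping the agent's instructions this minimal matters: the main conversation does the synthesis, so the subagent should act as a pass-through rather than interpret the expert's answer.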