Consults a council of AI experts using multiple models in parallel. Activate when user says "ask the experts", "consult the experts", "get expert opinions", or requests multi-model expert opinions on complex decisions, architecture, or strategic questions.
/plugin marketplace add michabbb/claude-code
/plugin install council-of-experts@michabbb-claude-plugins

This skill inherits all available tools. When active, it can use any tool Claude has access to.
Launches 5 subagents IN PARALLEL - each consults one expert and returns the response. Your main context stays clean.
Each expert Task is launched with run_in_background: true.

| Expert | CLI | Model |
|---|---|---|
| Grok | opencode | xai/grok-4-1-fast |
| Kimi | opencode | opencode/kimi-k2-thinking |
| Gemini | opencode | google/gemini-3-pro-preview |
| MiniMax | opencode | openrouter/minimax/minimax-m2.1 |
| GPT | codex | gpt-5.2 (high reasoning) |
CRITICAL: Experts need sufficient context. Before launching, gather the user's question, the relevant background and constraints, and the absolute paths of any files the experts should read.
Launch ALL 5 Task calls in a SINGLE message. Each subagent runs ONE Bash command.
Task 1 - Grok:
subagent_type: "expert-consultant"
run_in_background: true
prompt: |
Run this Bash command and return the output:
opencode run "@expert [QUESTION_WITH_CONTEXT]" -m xai/grok-4-1-fast -f [FILES] --format json 2>&1 | jq -r 'select(.type == "text") | "response: \(.part.text)\nsessionid: \(.sessionID)"'
Task 2 - Kimi:
subagent_type: "expert-consultant"
run_in_background: true
prompt: |
Run this Bash command and return the output:
opencode run "@expert [QUESTION_WITH_CONTEXT]" -m opencode/kimi-k2-thinking -f [FILES] --format json 2>&1 | jq -r 'select(.type == "text") | "response: \(.part.text)\nsessionid: \(.sessionID)"'
Task 3 - Gemini:
subagent_type: "expert-consultant"
run_in_background: true
prompt: |
Run this Bash command and return the output:
opencode run "@expert [QUESTION_WITH_CONTEXT]" -m google/gemini-3-pro-preview -f [FILES] --format json 2>&1 | jq -r 'select(.type == "text") | "response: \(.part.text)\nsessionid: \(.sessionID)"'
Task 4 - MiniMax:
subagent_type: "expert-consultant"
run_in_background: true
prompt: |
Run this Bash command and return the output:
opencode run "@expert [QUESTION_WITH_CONTEXT]" -m openrouter/minimax/minimax-m2.1 -f [FILES] --format json 2>&1 | jq -r 'select(.type == "text") | "response: \(.part.text)\nsessionid: \(.sessionID)"'
Task 5 - GPT:
subagent_type: "expert-consultant"
run_in_background: true
prompt: |
Run this Bash command and return the response + session ID:
codex exec --profile expert --sandbox read-only "[QUESTION_WITH_CONTEXT]
Files to analyze:
[FILE_PATHS]" -m gpt-5.2 -c model_reasoning_effort=\"high\" 2>&1
IMPORTANT: codex MUST use --sandbox read-only to prevent any file modifications!
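As with opencode, here is a hypothetical filled-in codex invocation (question and file path are placeholders; the flags mirror the template above):

```bash
# Hypothetical example - question and file path are placeholders
codex exec --profile expert --sandbox read-only "Should we migrate the job queue from Redis to RabbitMQ? Context: ~50k jobs/day, single region.

Files to analyze:
/home/user/project/docs/queue-architecture.md" \
  -m gpt-5.2 -c model_reasoning_effort=\"high\" 2>&1
```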
Use TaskOutput to collect all 5 responses. Each subagent returns the expert's response text plus the session ID needed for follow-ups.
Present to user:
## Expert Council Results
### Individual Responses
**Grok:** [summary]
**Kimi:** [summary]
**Gemini:** [summary]
**MiniMax:** [summary]
**GPT:** [summary]
### Consensus
- [Points where experts agree]
### Divergent Views
| Expert | Position |
|--------|----------|
### Recommendation
[Best solution combining insights]
### Session IDs
- Grok: [id]
- Kimi: [id]
- Gemini: [id]
- MiniMax: [id]
- GPT: [id]
To ask follow-up questions, resume the expert's session:

opencode:
opencode run "Follow-up" -s SESSION_ID -m [MODEL] 2>&1
codex:
codex exec resume SESSION_ID "Follow-up" 2>&1
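For example, resuming a Grok consultation might look like this (the session ID is a hypothetical placeholder; substitute the one returned by the original run):

```bash
# Hypothetical session ID - use the sessionid returned by the original consultation
opencode run "How would your recommendation change at 10x the load?" \
  -s ses_4f2a9c1d7e \
  -m xai/grok-4-1-fast 2>&1
```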
Passing files to the experts:

| CLI | Method |
|---|---|
| opencode | -f /absolute/path/file1 -f /absolute/path/file2 |
| codex | List files in prompt text (codex reads them itself) |
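A side-by-side sketch with hypothetical paths, showing both methods:

```bash
# opencode: pass each file explicitly with -f (absolute paths)
opencode run "@expert Review the error handling" \
  -f /home/user/project/src/api.ts \
  -f /home/user/project/src/errors.ts \
  -m xai/grok-4-1-fast --format json 2>&1

# codex: list the paths inside the prompt text; codex reads them itself
codex exec --profile expert --sandbox read-only "Review the error handling.

Files to analyze:
/home/user/project/src/api.ts
/home/user/project/src/errors.ts" 2>&1
```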
When using --format json, opencode outputs one JSON object per line (NDJSON). Example:
{"type":"step_start","timestamp":1765763743412,"sessionID":"ses_xxx",...}
{"type":"text","timestamp":1765763744943,"sessionID":"ses_xxx","part":{"type":"text","text":"The actual response",...}}
{"type":"step_finish","timestamp":1765763744961,"sessionID":"ses_xxx",...}
Parsing with jq (recommended):
Extract both response and session ID in one clean command:
opencode run "@expert [QUESTION]" -m [MODEL] --format json 2>&1 | jq -r 'select(.type == "text") | "response: \(.part.text)\nsessionid: \(.sessionID)"'
Output format:
response: The expert's answer here
sessionid: ses_xxxxxxxxxxxxx
This filters for the type="text" line and extracts both the response text and session ID in a clean, parseable format.
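To see the filter in isolation, you can pipe a sample NDJSON line through it (sample data only, standing in for real opencode output):

```bash
# Sample NDJSON line standing in for real opencode output
echo '{"type":"text","sessionID":"ses_abc123","part":{"type":"text","text":"Use RabbitMQ for durable routing."}}' \
  | jq -r 'select(.type == "text") | "response: \(.part.text)\nsessionid: \(.sessionID)"'
# Prints:
# response: Use RabbitMQ for durable routing.
# sessionid: ses_abc123
```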
- run_in_background: true for each Task
- subagent_type: "expert-consultant" - custom agent with Bash(opencode *), Bash(codex *) permissions
- --format json for opencode commands to get session IDs
- --sandbox read-only for codex - prevents any file modifications!

This skill requires the expert-consultant agent at agents/expert-consultant.md with these permissions:
tools: Bash(opencode *), Bash(codex *)
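A minimal sketch of what that agent file could look like, assuming the standard Claude Code subagent frontmatter (name, description, tools); only the tools line is taken from this skill's requirements, the rest is illustrative:

```markdown
---
name: expert-consultant
description: Runs a single external expert CLI command (opencode or codex) and returns its raw output.
tools: Bash(opencode *), Bash(codex *)
---

Run exactly the Bash command given in your prompt and return its complete
output (response text and session ID) without summarizing or modifying it.
```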