Run a multi-turn brainstorm between OpenAI and Gemini on a user-supplied topic, then return the transcript for synthesis. Use when the user wants to "brainstorm with other models", get cross-model perspectives, debate an idea, or explore a topic from multiple angles. Supports a research mode where both models use web search.
`npx claudepluginhub moebit/claude --plugin multi-llm`

How this skill is triggered: by the user, by Claude, or both.
Slash command: `/multi-llm:brainstorm <topic>`

This skill is limited to the following tools:
The summary Claude sees in its skill listing — used to decide when to auto-load this skill
This skill runs a back-and-forth dialogue between OpenAI and Gemini on `$ARGUMENTS`, prints the transcript to stdout, and lets you (Claude) synthesize the result for the user.
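The back-and-forth described above can be sketched as a simple alternating loop. This is an illustrative sketch, not brainstorm.py's actual code; `ask_model` is a hypothetical stand-in for the real OpenAI/Gemini API calls:

```python
def ask_model(model, topic, transcript):
    # Stub standing in for a real OpenAI/Gemini API call.
    return f"{model} take #{len(transcript) + 1} on {topic!r}"

def run_brainstorm(topic, turns=3, start="openai"):
    """Alternate turns between the two models, collecting a transcript.

    Each round is one turn from each model; `start` picks who opens.
    """
    order = ["openai", "gemini"] if start == "openai" else ["gemini", "openai"]
    transcript = []
    for round_no in range(1, turns + 1):
        for model in order:
            reply = ask_model(model, topic, transcript)
            transcript.append((round_no, model, reply))
    return transcript
```

With the default of 3 rounds this yields six transcript entries; `--start gemini` corresponds to `start="gemini"`.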
Default (3 rounds, no web search):

```shell
uv run "${CLAUDE_SKILL_DIR}/scripts/brainstorm.py" --topic "$ARGUMENTS"
```

Research mode (both models use web search):

```shell
uv run "${CLAUDE_SKILL_DIR}/scripts/brainstorm.py" --topic "$ARGUMENTS" --research
```

Pro tier: when the user asks for the "Pro models", "best models", "highest quality", "deep thinking", or similar, add `--pro` to use `gpt-5.5-pro` and `gemini-3.1-pro-preview` instead of the cheaper defaults:

```shell
uv run "${CLAUDE_SKILL_DIR}/scripts/brainstorm.py" --topic "$ARGUMENTS" --pro
```
Other flags:
- `--turns N`: number of rounds (default 3; each round is one OpenAI turn plus one Gemini turn)
- `--start gemini`: Gemini opens (default: OpenAI opens)
- `--openai-model NAME`: override the OpenAI model (default `gpt-5.5`; `gpt-5.5-pro` with `--pro`; or `$BRAINSTORM_OPENAI_MODEL`)
- `--gemini-model NAME`: override the Gemini model (default `gemini-3.1-flash-lite-preview`; `gemini-3.1-pro-preview` with `--pro`; or `$BRAINSTORM_GEMINI_MODEL`)

If the user expressed intent like "research it" or "look it up", add `--research`. If they asked for "a quick exchange" or "just one round", set `--turns 1`. If they asked for "the best" or "Pro tier", add `--pro`. These flags compose freely.
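The flag surface above could be declared with `argparse` roughly as follows. This is a sketch reconstructed from this page, not brainstorm.py's actual parser, and the help strings are paraphrases:

```python
import argparse

def build_parser():
    # Sketch of the documented flags; defaults are taken from this page,
    # not read from brainstorm.py itself.
    p = argparse.ArgumentParser(prog="brainstorm.py")
    p.add_argument("--topic", required=True)
    p.add_argument("--turns", type=int, default=3,
                   help="rounds; each round is one OpenAI + one Gemini turn")
    p.add_argument("--start", choices=["openai", "gemini"], default="openai")
    p.add_argument("--research", action="store_true",
                   help="both models use web search")
    p.add_argument("--pro", action="store_true",
                   help="use the Pro-tier models")
    p.add_argument("--openai-model", default=None)
    p.add_argument("--gemini-model", default=None)
    return p

# "A quick research round": --research and --turns 1 compose freely.
args = build_parser().parse_args(
    ["--topic", "edge caching", "--research", "--turns", "1"]
)
```

Note how the composed intent "research it, just one round" maps directly onto two independent flags rather than a special mode.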
- `OPENAI_API_KEY`: OpenAI API key
- `GEMINI_API_KEY` (or `GOOGLE_API_KEY`): Google AI API key

If either is missing, the script exits non-zero with a clear message. Relay it to the user and ask them to export the missing key before re-running.
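The missing-key check might look like the following sketch; the actual message text in brainstorm.py may differ:

```python
import os
import sys

def require_api_keys(env=os.environ):
    """Exit non-zero with a clear message if a required key is missing."""
    missing = []
    if not env.get("OPENAI_API_KEY"):
        missing.append("OPENAI_API_KEY")
    # Either Google-side name satisfies the Gemini requirement.
    if not (env.get("GEMINI_API_KEY") or env.get("GOOGLE_API_KEY")):
        missing.append("GEMINI_API_KEY (or GOOGLE_API_KEY)")
    if missing:
        # sys.exit with a string prints it to stderr and exits with status 1.
        sys.exit("Missing environment variables: " + ", ".join(missing))
```

Passing a dict for `env` keeps the check testable without touching the real environment.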
Don't just paste the transcript back; synthesize. The raw transcript is already on the user's screen, so your value is the judgment you add on top of it.