By lykhoyda
Safely execute Bash commands and Git operations: a PreToolUse hook automatically blocks destructive actions such as rm -rf / or rm -rf ~, git force-pushes to main/master, and git reset --hard on protected branches (main/master/release/*). Commands are validated with jq, grep, and git before the tool is allowed to run.
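The validation flow described above can be sketched as a small shell function. This is an illustrative approximation, not the plugin's actual hook script; the regexes are simplified, and returning 2 to block follows Claude Code's PreToolUse exit-code convention:

```shell
#!/usr/bin/env bash
# Sketch of a PreToolUse-style guard: check_cmd returns 2 (block) for
# destructive patterns and 0 (allow) otherwise. Illustrative only.
check_cmd() {
  local cmd="$1"
  # Block recursive deletes of / or ~
  if printf '%s' "$cmd" | grep -Eq 'rm +-rf +(/|~)( |$)'; then
    echo "Blocked: rm -rf on / or ~" >&2
    return 2
  fi
  # Block force-pushes to main/master
  if printf '%s' "$cmd" | grep -Eq 'git +push +.*(--force|-f) +.*(main|master)'; then
    echo "Blocked: force-push to a protected branch" >&2
    return 2
  fi
  return 0
}

# In a real hook, the command comes from the JSON Claude Code pipes to stdin:
#   cmd=$(jq -r '.tool_input.command // empty')
#   check_cmd "$cmd" || exit 2
```

The real plugin applies more patterns (e.g. git reset --hard on release/* branches), but the shape is the same: parse the tool input, pattern-match, and exit non-zero to block.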
npx claudepluginhub lykhoyda/ask-llm --plugin ask-llm
Coordinates multi-LLM brainstorming by (1) performing its own independent Claude Opus research on the topic and (2) consulting external providers (Gemini, Codex, Ollama) via a single foreground Bash dispatch, then synthesizing all findings into consensus points, unique insights, and actionable recommendations. Claude's findings are weighted higher when verified against real repository state.
Runs an isolated Codex code review in a separate context window. Uses confidence-based filtering to report only high-priority issues. Use when you want a second opinion from OpenAI Codex on code changes, diffs, or architecture decisions.
Runs an isolated Gemini code review in a separate context window. Uses confidence-based filtering to report only high-priority issues. Use when you want a second opinion from Gemini on code changes, diffs, or architecture decisions.
Runs an isolated Ollama code review using a local LLM. Uses confidence-based filtering to report only high-priority issues. Runs entirely locally — no data leaves your machine.
Get a second opinion from a local Ollama LLM on your current code changes. Analyzes staged/unstaged diffs and returns prioritized findings. No API keys needed. Use when user asks to "review with Ollama", "local code review", or "review offline".
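Under the hood, a local review like this reduces to building a prompt from the diff and piping it into Ollama. A minimal sketch, assuming a pulled `llama3` model as a placeholder (the skill's actual prompt and model choice are unknown):

```shell
# Build a review prompt from a diff; the wording here is illustrative only.
build_review_prompt() {
  printf 'Review this diff and list only high-priority issues:\n%s\n' "$1"
}

# Usage (requires git and a running Ollama daemon; "llama3" is a placeholder):
#   build_review_prompt "$(git diff HEAD)" | ollama run llama3
```

Because everything runs through the local Ollama daemon, the diff never leaves your machine.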
Send a topic to ALL LLM providers (Gemini, Codex, Ollama) in parallel while Claude Opus performs its own independent research in parallel. Synthesizes findings from up to four participants. Shortcut for /brainstorm gemini,codex,ollama <topic>. Requires Ollama to be running locally.
Send a topic to multiple LLM providers in parallel while Claude Opus performs its own independent research in parallel, then synthesize all findings. Usage: /brainstorm [providers] <topic>. External providers default to gemini,codex. Example: /brainstorm gemini,codex,ollama "review this architecture"
Get a second opinion from OpenAI Codex on your current code changes. Analyzes staged/unstaged diffs and returns prioritized findings. Use when user asks to "review with Codex", "Codex code review", or "ask Codex to check my code".
Get a second opinion from Gemini on your current code changes. Analyzes staged/unstaged diffs and returns prioritized findings. Use when user asks to "review with Gemini", "Gemini code review", or "ask Gemini to check my code".
This skill should be used when the user asks to "review my code with multiple providers", "get reviews from Gemini and Codex", "multi-provider review", "review changes", or wants independent code reviews from both Gemini and Codex in parallel.
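The parallel fan-out behind this skill can be sketched generically: launch each provider's review in the background, wait for all of them, then gather the outputs for synthesis. The provider commands below are placeholders, not the skill's real invocations:

```shell
# Run each provider command in the background, wait for all, print results.
run_parallel_reviews() {
  local outdir="$1"; shift
  local i=0
  for cmd in "$@"; do
    i=$((i + 1))
    sh -c "$cmd" > "$outdir/review_$i.txt" &   # one background job per provider
  done
  wait                                         # block until every job finishes
  cat "$outdir"/review_*.txt                   # combined output for synthesis
}

# Example with placeholder provider commands:
#   run_parallel_reviews /tmp "gemini-review-cmd" "codex-review-cmd"
```

Each provider review runs in its own process, so the slowest provider bounds the total wall-clock time rather than the sum of all of them.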
External LLM integration tools for Claude Code. Get second opinions from Codex (OpenAI) and Gemini (Google) on architecture, design, and code review.
Code review, compare, and debate tools using multiple AI models
Delegate tasks to Codex, Gemini, and OpenCode AI agents via Owlex MCP
Delegate plan execution to Codex CLI via ASP. Part of cc-multi-cli-plugin. Requires the `multi` plugin.
Consult multiple AI coding agents (Gemini, OpenAI, Grok, Perplexity, plus codex and gemini CLIs when installed) to get diverse perspectives on coding problems
Executes bash commands
Hook triggers when Bash tool is used
Uses power tools
Uses Bash, Write, or Edit tools
Multi-LLM integration for second opinions and task delegation