WHEN: The user faces a complex architectural decision, asks for "another perspective" or "second opinion", is choosing between multiple valid approaches, is reviewing critical or security-sensitive code, is weighing design trade-offs, says "sanity check" or "what do you think", or asks about contentious patterns.
WHEN NOT: Simple questions, straightforward implementations, routine code changes, the user has expressed a strong preference, or the user explicitly declines other opinions.
Proactively suggests getting a second LLM's perspective on complex architectural decisions, security-sensitive code reviews, or when users ask for "sanity checks." Triggers on ambiguous trade-offs, critical code, or explicit requests like "another perspective," but skips simple tasks or when users have strong preferences.
`/plugin marketplace add gopherguides/gopher-ai`
`/plugin install llm-tools@gopher-ai`

This skill inherits all available tools. When active, it can use any tool Claude has access to.
Proactively suggest getting another LLM's perspective when the situation warrants it.
Suggest a second opinion when you detect:
- A complex architectural decision where multiple valid approaches exist
- A review of critical or security-sensitive code
- A significant design trade-off
- An explicit request such as "another perspective", "second opinion", "sanity check", or "what do you think"
- A question about a contentious pattern
When conditions are met, offer specific options:
This involves [type of decision]. Would you like a second opinion from another LLM?
- `/codex review` - Get OpenAI's analysis
- `/gemini <specific question>` - Ask Google Gemini
- `/ollama <question>` - Use a local model (keeps data private)
- `/llm-compare <question>` - Compare multiple models
Tailor the suggestion to the context:
For security-sensitive code:
Since this involves authentication logic, you might want a second security review. Try `/codex review` or `/ollama` (keeps code local) for another perspective.
For architectural decisions:
This is a significant architectural choice. Different models sometimes weigh trade-offs differently. Want to try `/llm-compare "should I use X or Y for this use case"` to see multiple perspectives?
For complex algorithms:
This algorithm has some complexity. A second set of eyes might catch edge cases. Try `/codex explain the edge cases in this algorithm`.
Do not suggest second opinions when:
- The question is simple or the implementation is straightforward
- The change is routine
- The user has already expressed a strong preference
- The user has explicitly declined other opinions
Always mention `/ollama` as an option when the code might be sensitive:

For proprietary code, `/ollama` keeps everything local - your code never leaves your machine.
Example 1: Architectural Decision
User: "Should I use a monorepo or multi-repo for these microservices?"
Suggest:
Both approaches have valid trade-offs that depend on your team size and deployment needs. For a decision this impactful, would you like another perspective?
- `/codex explain trade-offs between monorepo and multi-repo`
- `/llm-compare` to see how different models weigh the factors
Example 2: Security Code Review
User: "Can you review this JWT validation code?"
After providing your review, suggest:
Since this is security-critical authentication code, you might want a second review:
- `/codex review` for OpenAI's security analysis
- `/ollama review this JWT implementation` (keeps code private)
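To ground the example, here is a minimal Go sketch of the kind of JWT validation code such a review might target. It is illustrative only (not part of the skill) and assumes the `github.com/golang-jwt/jwt/v5` library with an HMAC-signed token; the claim name and error value are placeholders.

```go
package auth

import (
	"errors"

	"github.com/golang-jwt/jwt/v5"
)

var ErrInvalidToken = errors.New("invalid token")

// ValidateToken parses an HMAC-signed JWT and returns its subject claim.
// A second security review would typically focus on algorithm pinning,
// expiry handling, and how the signing key is sourced.
func ValidateToken(tokenString string, secret []byte) (string, error) {
	token, err := jwt.Parse(
		tokenString,
		func(t *jwt.Token) (interface{}, error) { return secret, nil },
		// Pin the accepted algorithm so a token cannot downgrade the check.
		jwt.WithValidMethods([]string{"HS256"}),
	)
	if err != nil || !token.Valid {
		return "", ErrInvalidToken
	}
	claims, ok := token.Claims.(jwt.MapClaims)
	if !ok {
		return "", ErrInvalidToken
	}
	sub, _ := claims["sub"].(string)
	if sub == "" {
		return "", ErrInvalidToken
	}
	return sub, nil
}
```

A second model reviewing code like this tends to probe the parts that are easy to get subtly wrong: accepted algorithms, expiry and clock skew, and key management.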
Example 3: Complex Algorithm
User: "Does this concurrent queue implementation look correct?"
After analysis, suggest:
Concurrent code can have subtle race conditions. Want a second opinion?
- `/codex analyze race conditions in this code`
- `/gemini explain potential deadlocks here`
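To illustrate what a "subtle race condition" can look like, here is a hypothetical Go queue (not the user's code) where the emptiness check and the pop run under separate lock acquisitions, so two goroutines can both pass the check and one pops from an empty slice.

```go
package queue

import "sync"

// Queue is a hypothetical concurrent queue used only to illustrate a
// check-then-act race; it is not taken from the user's implementation.
type Queue struct {
	mu    sync.Mutex
	items []int
}

func (q *Queue) Push(v int) {
	q.mu.Lock()
	defer q.mu.Unlock()
	q.items = append(q.items, v)
}

func (q *Queue) Len() int {
	q.mu.Lock()
	defer q.mu.Unlock()
	return len(q.items)
}

// PopIfNotEmpty looks safe, but Len() and the pop use separate lock
// acquisitions: two goroutines can both observe Len() > 0, and the
// slower one then slices an empty queue and panics.
func (q *Queue) PopIfNotEmpty() (int, bool) {
	if q.Len() == 0 {
		return 0, false
	}
	q.mu.Lock()
	defer q.mu.Unlock()
	v := q.items[0] // may panic if another goroutine emptied the queue first
	q.items = q.items[1:]
	return v, true
}

// Pop re-checks emptiness while holding the lock, closing the race window.
func (q *Queue) Pop() (int, bool) {
	q.mu.Lock()
	defer q.mu.Unlock()
	if len(q.items) == 0 {
		return 0, false
	}
	v := q.items[0]
	q.items = q.items[1:]
	return v, true
}
```

Races like this often survive a single review, which is exactly when a second model's analysis is worth suggesting.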
Example 4: Design Pattern Choice
User: "Should I use the factory pattern or builder pattern here?"
Suggest:
Both patterns could work here with different trade-offs. Want to see how other models approach this?
/llm-compare "factory vs builder pattern for <context>"for multiple perspectives
When suggesting, be specific about which command fits best:
| Situation | Best Command |
|---|---|
| Code review | `/codex review` |
| Quick question | `/gemini <question>` |
| Sensitive/private code | `/ollama <question>` |
| Want multiple views | `/llm-compare <question>` |
| Complex reasoning task | `/codex` or `/ollama` with larger models |