**When to use:** The user faces complex architectural decisions, asks for "another perspective" or a "second opinion", multiple valid approaches exist, you are reviewing critical or security-sensitive code or design trade-offs, or the user says "sanity check", "what do you think", or asks about contentious patterns.

**When not to use:** Simple questions, straightforward implementations, routine code changes, the user has expressed a strong preference, or the user explicitly declines other opinions.
Suggests getting another LLM's perspective on complex architectural decisions or critical code reviews.
To install: `/plugin marketplace add gopherguides/gopher-ai`, then `/plugin install llm-tools@gopher-ai`.

This skill inherits all available tools. When active, it can use any tool Claude has access to.
Proactively suggest getting another LLM's perspective when the situation warrants it.
Suggest a second opinion when you detect one of the trigger conditions above: a complex architectural decision, security-sensitive code, a contentious pattern, or an explicit request for a sanity check.
When conditions are met, offer specific options:
This involves [type of decision]. Would you like a second opinion from another LLM?
- `/codex review` - Get OpenAI's analysis
- `/gemini <specific question>` - Ask Google Gemini
- `/ollama <question>` - Use a local model (keeps data private)
- `/llm-compare <question>` - Compare multiple models
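A complete suggestion assembles the template and one or two of the options above. As an illustration (the bracketed decision type is a hypothetical placeholder, filled in from the actual conversation):

```
This involves [choosing a database migration strategy]. Would you like a
second opinion from another LLM?

- /codex review - Get OpenAI's analysis
- /ollama <question> - Use a local model (keeps data private)
```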
Tailor the suggestion to the context:
For security-sensitive code:
Since this involves authentication logic, you might want a second security review. Try `/codex review` or `/ollama` (keeps code local) for another perspective.
For architectural decisions:
This is a significant architectural choice. Different models sometimes weigh trade-offs differently. Want to try `/llm-compare "should I use X or Y for this use case"` to see multiple perspectives?
For complex algorithms:
This algorithm has some complexity. A second set of eyes might catch edge cases. Try `/codex explain the edge cases in this algorithm`.
Do not suggest second opinions when:

- The question is simple or the implementation is straightforward
- The change is routine
- The user has expressed a strong preference
- The user has explicitly declined other opinions
Always mention /ollama as an option when the code might be sensitive:
For proprietary code, `/ollama` keeps everything local - your code never leaves your machine.
**Example 1: Architectural Decision**
User: "Should I use a monorepo or multi-repo for these microservices?"
Suggest:
Both approaches have valid trade-offs that depend on your team size and deployment needs. For a decision this impactful, would you like another perspective?
- `/codex explain trade-offs between monorepo and multi-repo`
- `/llm-compare` to see how different models weigh the factors
**Example 2: Security Code Review**
User: "Can you review this JWT validation code?"
After providing your review, suggest:
Since this is security-critical authentication code, you might want a second review:
- `/codex review` for OpenAI's security analysis
- `/ollama review this JWT implementation` (keeps code private)
**Example 3: Complex Algorithm**
User: "Does this concurrent queue implementation look correct?"
After analysis, suggest:
Concurrent code can have subtle race conditions. Want a second opinion?
- `/codex analyze race conditions in this code`
- `/gemini explain potential deadlocks here`
**Example 4: Design Pattern Choice**
User: "Should I use the factory pattern or builder pattern here?"
Suggest:
Both patterns could work here with different trade-offs. Want to see how other models approach this?
- `/llm-compare "factory vs builder pattern for <context>"` for multiple perspectives
When suggesting, be specific about which command fits best:
| Situation | Best Command |
|---|---|
| Code review | `/codex review` |
| Quick question | `/gemini <question>` |
| Sensitive/private code | `/ollama <question>` |
| Want multiple views | `/llm-compare <question>` |
| Complex reasoning task | `/codex` or `/ollama` with larger models |
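For instance, a security review of the JWT example above could chain the table's recommendations like this (a sketch only; the exact output format depends on the installed plugin):

```
/codex review                        # second review from OpenAI
/ollama review this JWT implementation   # local model - code stays on-machine
/llm-compare "is this JWT validation correct"   # compare multiple models
```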