From ai-model-research
Use when the user wants a recommendation for which OpenRouter model to use for a task — interactive, asks clarifying questions about budget, context, modalities, then proposes a ranked shortlist. Triggers on phrases like "recommend an OpenRouter model", "which OR model should I use for <task>", "help me pick a model on OpenRouter", "what's the best OR model for <task>", "suggest an OpenRouter model".
Install: `npx claudepluginhub danielrosehill/claude-code-plugins --plugin ai-model-research`

This skill uses the workspace's default tool permissions.
Interactive recommendation flow: clarify the user's requirements, query the catalog, propose 2–3 ranked picks with rationale.
The user wants guidance on choosing a model and has not given enough detail to filter directly. The task is open-ended ("help me pick" / "which model should I use for X").
Ask the user a short set of clarifying questions — only the ones not already answered. Keep the round-trip tight.
Don't ask all seven if the user has already answered some. If you have enough to make a reasonable shortlist after 2–3 questions, proceed.
Fetch https://openrouter.ai/api/v1/models and apply filters:
- architecture.input_modalities covers the required input modalities
- supported_parameters includes tools / structured_outputs when the task needs them
- context_length >= user_minimum
- pricing.prompt and pricing.completion within the user's price ceiling
- If the user wants open-weight models, drop openai/, anthropic/, google/ (proprietary); keep meta-llama/, deepseek/, qwen/, mistralai/, etc.

Score each candidate against the user's stated priorities (cost vs. capability vs. context). Present 2–3 recommendations, not a long list.
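The fetch-and-filter step can be sketched as follows. This is a minimal sketch: the field names mirror the shape of the OpenRouter /api/v1/models response (prices are per-token strings), and the two sample catalog entries below are made up for illustration.

```python
def filter_models(models, modality="text", need_tools=False,
                  min_context=0, max_prompt_price=float("inf"),
                  open_weights_only=False):
    """Apply the user's constraints to the raw model list."""
    proprietary = ("openai/", "anthropic/", "google/")
    picks = []
    for m in models:
        arch = m.get("architecture", {})
        if modality not in arch.get("input_modalities", ["text"]):
            continue  # can't accept the required input modality
        if need_tools and "tools" not in m.get("supported_parameters", []):
            continue  # no tool calling
        if m.get("context_length", 0) < min_context:
            continue  # context window too small
        if float(m.get("pricing", {}).get("prompt", "0")) > max_prompt_price:
            continue  # over the price ceiling
        if open_weights_only and m["id"].startswith(proprietary):
            continue  # proprietary provider
        picks.append(m)
    return picks

# Live usage would fetch the real catalog, e.g.:
# import json, urllib.request
# with urllib.request.urlopen("https://openrouter.ai/api/v1/models") as r:
#     catalog = json.load(r)["data"]

# Hypothetical sample entries, shaped like the real response:
catalog = [
    {"id": "openai/gpt-4o", "context_length": 128000,
     "architecture": {"input_modalities": ["text", "image"]},
     "supported_parameters": ["tools", "structured_outputs"],
     "pricing": {"prompt": "0.0000025", "completion": "0.00001"}},
    {"id": "meta-llama/llama-3.1-8b-instruct", "context_length": 131072,
     "architecture": {"input_modalities": ["text"]},
     "supported_parameters": ["tools"],
     "pricing": {"prompt": "0.00000002", "completion": "0.00000005"}},
]

print([m["id"] for m in filter_models(catalog, need_tools=True,
                                      open_weights_only=True)])
# → ['meta-llama/llama-3.1-8b-instruct']
```

Scoring against the user's priorities then happens over `picks`; the filter only removes hard disqualifications.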
For each recommendation, provide a brief rationale tied to the user's stated priorities.
End by offering: "Want a head-to-head comparison of these picks, or a deeper evaluation of any one model?" — those route to or-compare-models and or-evaluate-model respectively.
Tool calling requires tools in supported_parameters. Mention if a candidate lacks this. Free variants (all pricing.* fields "0") usually have stricter rate limits and lower availability — flag this when recommending one.
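These two caveat checks can be expressed directly against a catalog record. A minimal sketch, assuming the field shapes described above; the sample record is hypothetical.

```python
def supports_tools(model):
    """Tool calling is only usable if 'tools' is advertised."""
    return "tools" in model.get("supported_parameters", [])

def is_free_variant(model):
    """Free variants report every pricing field as the string '0'."""
    pricing = model.get("pricing", {})
    return bool(pricing) and all(float(v) == 0.0 for v in pricing.values())

# Hypothetical record shaped like a catalog entry:
model = {"id": "deepseek/deepseek-r1:free",
         "supported_parameters": ["tools"],
         "pricing": {"prompt": "0", "completion": "0"}}

print(supports_tools(model), is_free_variant(model))  # → True True
```

A recommendation that fails `supports_tools` (when the task needs tools) or passes `is_free_variant` should carry the corresponding caveat in its rationale.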