From llm-externalizer
Use when asking for OpenRouter model details — supported params, pricing, latency, uptime, quantization. Trigger with "openrouter model info", "or-model-info", "what params does X support", "show pricing for", "check model support".
npx claudepluginhub emasoft/emasoft-plugins --plugin llm-externalizer

Takes a single argument: `<model-id>`.

This skill uses the workspace's default tool permissions.
Query OpenRouter's `/v1/models/{exact_id}/endpoints` for a specific model and display
context length, pricing, supported request-body parameters, quantization, uptime,
latency, and throughput. Uses the LLM Externalizer CLI (not the MCP tool) so it works
from subagents — MCP tools from plugins are not available in subagent contexts.
- llm-externalizer CLI on PATH (bundled with the plugin)
- $OPENROUTER_API_KEY set, OR active profile is OpenRouter-backed

Copy this checklist and track your progress:
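A minimal preflight sketch for the auth prerequisite (this check is not part of the skill itself; the function name is an assumption for illustration):

```bash
# Hypothetical helper: report which auth source will be used, per the
# prerequisites above. The skill does not ship this; it only illustrates
# the "$OPENROUTER_API_KEY set, OR OpenRouter-backed profile" rule.
check_openrouter_auth() {
  if [ -n "${OPENROUTER_API_KEY:-}" ]; then
    echo "auth: env key"
  else
    echo "auth: falling back to active profile (must be OpenRouter-backed)"
  fi
}
```

Running `check_openrouter_auth` with the key exported prints `auth: env key`; with it unset, the profile fallback message.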
Check the model id for a :free / :thinking / :beta suffix. Also scan for optional format flags:
- --no-color / --nocolor / --bw / --mono → forward --no-color
- --markdown / --plain → forward --markdown
- --json / --raw → forward --json

```bash
npx llm-externalizer model-info "<exact-id>" [flags]
```

Per endpoint, the output shows: context, max_completion, quantization, capability flags (reasoning, tools, structured output, caching), pricing ($/M tokens), uptime (5m/30m/1d), latency + throughput percentiles, and supported_parameters. Live data, no cache.
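The flag-alias mapping above can be expressed as a small shell helper (hypothetical; the CLI itself only accepts the canonical flags):

```bash
# Hypothetical helper implementing the alias table above: map any
# user-supplied alias to the canonical flag forwarded to the CLI.
normalize_flag() {
  case "$1" in
    --no-color|--nocolor|--bw|--mono) echo "--no-color" ;;
    --markdown|--plain)               echo "--markdown" ;;
    --json|--raw)                     echo "--json" ;;
    *)                                echo "$1" ;;  # pass everything else through
  esac
}
```

For example, `normalize_flag --bw` prints `--no-color`.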
```bash
# Default colored table
npx llm-externalizer model-info "nvidia/nemotron-3-super-120b-a12b:free"

# Compare providers (Llama 3.3 has 17 endpoints)
npx llm-externalizer model-info "meta-llama/llama-3.3-70b-instruct"

# Markdown table — renders in any markdown viewer
npx llm-externalizer model-info "google/gemini-2.5-flash" --markdown

# Raw JSON to stdout (for jq / scripts)
npx llm-externalizer model-info "anthropic/claude-sonnet-4.5" --json

# Raw JSON written to a file
npx llm-externalizer model-info "x-ai/grok-4.1-fast" --json grok-info.json
```
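Once the JSON is on disk it can be sliced with jq. The field paths below (`.data.endpoints[].provider_name`, `.pricing.prompt`, `.pricing.completion`) are assumptions based on OpenRouter's public endpoints API response shape; verify them against your actual dump. A tiny stand-in file keeps the example runnable offline:

```bash
# Sketch: list each provider's prompt and completion price from a --json dump.
# The stand-in file below mimics the assumed response shape.
cat > grok-info.json <<'EOF'
{"data":{"endpoints":[{"provider_name":"xAI","pricing":{"prompt":"0.0000002","completion":"0.0000005"}}]}}
EOF
jq -r '.data.endpoints[] | "\(.provider_name)\t\(.pricing.prompt)\t\(.pricing.completion)"' grok-info.json
```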
See references/example-output.md for a full sample, and references/use-cases.md for more scenarios.
| Error | Resolution |
|---|---|
| OpenRouter returned 404 | Wrong model id — check case, vendor prefix, :free / :thinking suffix |
| No OpenRouter auth token available | Set $OPENROUTER_API_KEY or switch to an openrouter-remote profile |
| Network error | Retry once; check /llm-externalizer:llm-externalizer-discover for service status |
| OpenRouter returned no endpoints | Model deprecated — suggest an alternative |
Full table in references/errors.md.
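The "retry once" guidance for network errors can be wrapped generically (a sketch; the skill has no built-in retry):

```bash
# Sketch: run a command and retry exactly once on failure, matching the
# "Network error: retry once" resolution in the table above.
retry_once() {
  "$@" && return 0
  sleep 1
  "$@"
}

# Hypothetical invocation:
# retry_once npx llm-externalizer model-info "meta-llama/llama-3.3-70b-instruct"
```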