Queries HuggingFace benchmark leaderboards to find top models for tasks like coding or reasoning, filters by device memory constraints, enriches with model sizes, and outputs comparison tables.
Finds the best models for a task by querying official HF benchmark leaderboards, enriching results with model size data, filtering for what fits on the user's device, and returning a comparison table with benchmark scores.
Extract from the user's message:
- the task (e.g., coding, reasoning)
- the device, if mentioned, and its available memory

If the device is not mentioned, skip filtering entirely and return the highest-performing models regardless of size. If the task is genuinely ambiguous, ask one clarifying question.
When a device is specified, extract its available memory (unified RAM for Apple Silicon, VRAM for discrete GPUs) and apply a rough sizing rule: a model needs about 2 GB per billion parameters at fp16 and about 0.5 GB per billion at Q4. Examples:

- 16 GB → up to 8B at fp16, or 32B at Q4
- 24 GB VRAM → up to 12B at fp16, or 48B at Q4
- 8 GB → up to 4B at fp16, or 16B at Q4
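A minimal sketch of that arithmetic, assuming the memory figure has already been parsed into a `MEM_GB` variable (hypothetical name):

```bash
# Rule of thumb: ~2 GB per billion params at fp16, ~0.5 GB per billion at Q4.
MEM_GB=16   # assumed input, extracted from the user's message
echo "fp16 cap: ~$((MEM_GB / 2))B params; Q4 cap: ~$((MEM_GB * 2))B params"
# 16 GB -> "fp16 cap: ~8B params; Q4 cap: ~32B params", matching the examples above.
```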
Fetch the full list of official HF benchmarks:
```bash
curl -s -H "Authorization: Bearer $(cat ~/.cache/huggingface/token)" \
  "https://huggingface.co/api/datasets?filter=benchmark:official&limit=500" | jq '[.[] | {id, tags, description}]'
```
Read the returned list and select the datasets most relevant to the user's task — match on dataset id, tags, and description. Use your judgment; don't limit yourself to 2-3. Aim for comprehensive coverage: if 5 benchmarks clearly cover the task, use all 5.
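Selection should remain a judgment call, but when the returned list is long, a rough keyword pre-filter can shrink it before reading. A minimal sketch, assuming the previous response was saved to `benchmarks.json` and the task keyword is "code" (both hypothetical):

```bash
# Rough pre-filter only; still read the survivors and pick by judgment.
jq '[.[] | select(
      ((.id // "") + " " + ((.tags // []) | join(" ")) + " " + (.description // ""))
      | test("code"; "i"))]' benchmarks.json
```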
For each selected benchmark dataset:
```bash
curl -s -H "Authorization: Bearer $(cat ~/.cache/huggingface/token)" \
  "https://huggingface.co/api/datasets/<namespace>/<repo>/leaderboard" | jq '[.[:15] | .[] | {rank, modelId, value, verified}]'
```
Collect model IDs and scores across all benchmarks. If a leaderboard returns an error (404, 401, etc.), skip it and note it in the output.
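A sketch of that collection loop, assuming the selected dataset ids sit in a `BENCHMARKS` variable (hypothetical) and reusing the `/leaderboard` endpoint from above; non-200 responses are skipped and noted:

```bash
TOKEN=$(cat ~/.cache/huggingface/token)
BENCHMARKS="namespace/benchmark-1 namespace/benchmark-2"   # placeholder ids from the previous step
for ds in $BENCHMARKS; do
  resp=$(curl -s -w '\n%{http_code}' -H "Authorization: Bearer $TOKEN" \
    "https://huggingface.co/api/datasets/$ds/leaderboard")
  status=${resp##*$'\n'}                     # last line carries the HTTP status
  if [ "$status" != "200" ]; then
    echo "Skipped $ds (HTTP $status)" >&2    # note the failure in the output
    continue
  fi
  printf '%s\n' "${resp%$'\n'*}" \
    | jq --arg ds "$ds" '[.[:15][] | {benchmark: $ds, rank, modelId, value, verified}]'
done
```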
For each of the top 10-15 candidate model IDs, fetch the model info:
```bash
# REST API
curl -s -H "Authorization: Bearer $(cat ~/.cache/huggingface/token)" \
  "https://huggingface.co/api/models/org/model1" | jq '{safetensors, tags, cardData}'

# CLI (hf-cli)
hf models info org/model1 --json | jq '{safetensors, tags, cardData}'
```
Extract from each response:
- `safetensors.total` → convert to billions of parameters (e.g., 7_241_748_480 → "7.2B")
- the license from tags (`license:apache-2.0`, `license:mit`, etc.)
- if `safetensors` is absent, parse the size from the model name (look for "7b", "8b", "13b", "70b", "72b", etc.)

If a device was specified: apply the sizing rule above and tag each model as fitting at fp16, at Q4 only, or not at all (see the sketch below).
If no device was mentioned: skip all size filtering — just rank by benchmark score.
Then: rank by benchmark score (descending), keep top 5-8 models.
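A sketch of the size extraction and fit tagging for a single model, reusing the sizing rule from step 1; `org/model1` and `MEM_GB=16` are placeholders:

```bash
MEM_GB=16   # assumed device memory from step 1
info=$(curl -s -H "Authorization: Bearer $(cat ~/.cache/huggingface/token)" \
  "https://huggingface.co/api/models/org/model1")

params_b=$(printf '%s' "$info" | jq -r '.safetensors.total // empty')
if [ -n "$params_b" ]; then
  params_b=$(echo "scale=1; $params_b / 1000000000" | bc)   # 7241748480 -> 7.2
else
  # Fallback: pull "7b", "13b", "70b", ... out of the model id
  params_b=$(printf '%s' "$info" | jq -r '.id' | grep -oiE '[0-9]+(\.[0-9]+)?b' | tail -1 | tr -d 'bB')
fi

if   (( $(echo "$params_b * 2   <= $MEM_GB" | bc -l) )); then echo "Yes (fp16)"
elif (( $(echo "$params_b * 0.5 <= $MEM_GB" | bc -l) )); then echo "Q4 only"
else echo "Too large"; fi
```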
Include proprietary models (GPT-4, Claude, Gemini) if they appear on leaderboards, but flag them as "API only / not self-hostable". If the user explicitly asked for local/open models only, exclude them.
| # | Model | Params | [Benchmark 1] | [Benchmark 2] | License | On device |
|---|-------|--------|--------------|--------------|---------|-----------|
| ⭐1 | [org/name](https://huggingface.co/org/name) | 7B | 85.2% | — | Apache 2.0 | Yes (fp16) |
| 2 | [org/name](https://huggingface.co/org/name) | 13B | 83.1% | 71.5% | MIT | Q4 only |
| 3 | [org/name](https://huggingface.co/org/name) | 70B | 90.0% | 81.0% | Llama | Too large |
Table notes:
- model names link to https://huggingface.co/<model_id>
- use "—" for benchmarks where the model wasn't evaluated
- "On device" values: Yes (fp16), Q4 only, Too large, API only

After presenting the table, ask the user: "Would you like to run [top recommended model]?"
If they say yes, ask how they'd prefer to run it.
Fallback: if the leaderboard endpoints fail or don't cover the task, use `hub_repo_search` with `filters=["<task>"]` sorted by `trendingScore` to find popular models tagged with the task, and note in the output that these results are ranked by popularity rather than benchmark score.