Queries Hugging Face datasets via the Dataset Viewer API for splits, row previews/pagination, search, filters, parquet links, metadata, and statistics.
Use this skill when you need read-only exploration or extraction of a Hugging Face dataset through the Dataset Viewer API.
Capabilities:
- Check dataset availability with /is-valid.
- Discover configs and splits with /splits.
- Preview rows with /first-rows.
- Paginate with /rows using offset and length (max 100).
- /search for text matching and /filter for row predicates.
- /parquet for shard listings; totals/metadata via /size and /statistics.

Conventions:
- Base URL: https://datasets-server.huggingface.co
- All endpoints use GET.
- offset is 0-based.
- length max is usually 100 for row-like endpoints.
- Gated/private datasets require Authorization: Bearer <HF_TOKEN>.

Endpoints:
- Validate dataset: /is-valid?dataset=<namespace/repo>
- List subsets and splits: /splits?dataset=<namespace/repo>
- Preview first rows: /first-rows?dataset=<namespace/repo>&config=<config>&split=<split>
- Paginate rows: /rows?dataset=<namespace/repo>&config=<config>&split=<split>&offset=<int>&length=<int>
- Search text: /search?dataset=<namespace/repo>&config=<config>&split=<split>&query=<text>&offset=<int>&length=<int>
- Filter with predicates: /filter?dataset=<namespace/repo>&config=<config>&split=<split>&where=<predicate>&orderby=<sort>&offset=<int>&length=<int>
- List parquet shards: /parquet?dataset=<namespace/repo>
- Get size totals: /size?dataset=<namespace/repo>
- Get column statistics: /statistics?dataset=<namespace/repo>&config=<config>&split=<split>
- Get Croissant metadata (if available): /croissant?dataset=<namespace/repo>

Pagination pattern:
curl "https://datasets-server.huggingface.co/rows?dataset=stanfordnlp/imdb&config=plain_text&split=train&offset=0&length=100"
curl "https://datasets-server.huggingface.co/rows?dataset=stanfordnlp/imdb&config=plain_text&split=train&offset=100&length=100"
When pagination is partial, use response fields such as num_rows_total, num_rows_per_page, and partial to drive continuation logic.
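The continuation logic above can be sketched offline as a plain offset loop. Here num_rows_total=250 is a stand-in for the value a real /rows response would report (e.g. via jq -r '.num_rows_total'), and the echo stands in for the actual curl call:

```shell
# Offset-based continuation sketch. num_rows_total would normally come from
# the first /rows response; 250 is a stand-in value for illustration.
num_rows_total=250
length=100
offset=0
while [ "$offset" -lt "$num_rows_total" ]; do
  # In practice: curl ".../rows?dataset=...&offset=$offset&length=$length"
  echo "offset=$offset&length=$length"
  offset=$((offset + length))
done
# → offset=0&length=100
# → offset=100&length=100
# → offset=200&length=100
```

The loop stops once offset reaches num_rows_total, so a final short page (fewer than length rows) is still fetched exactly once.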
Search/filter notes:
- /search matches string columns (full-text style behavior is internal to the API).
- /filter requires predicate syntax in where and optional sort in orderby.
- Use npx parquetlens with Hub parquet alias paths for SQL querying.
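The where value must be URL-encoded before it goes into the /filter query string. A minimal offline sketch using jq's @uri filter; the predicate '"label" = 0' and the stanfordnlp/imdb dataset are illustrative stand-ins, and column names are dataset-specific:

```shell
# Encode a hypothetical predicate for /filter and print the resulting URL.
where='"label" = 0'
enc=$(jq -rn --arg w "$where" '$w|@uri')
echo "https://datasets-server.huggingface.co/filter?dataset=stanfordnlp/imdb&config=plain_text&split=train&where=${enc}&offset=0&length=10"
# → ...&where=%22label%22%20%3D%200&offset=0&length=10
```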
Parquet alias shape:
hf://datasets/<namespace>/<repo>@~parquet/<config>/<split>/<shard>.parquet
Derive <config>, <split>, and <shard> from Dataset Viewer /parquet:
curl -s "https://datasets-server.huggingface.co/parquet?dataset=cfahlgren1/hub-stats" \
| jq -r '.parquet_files[] | "hf://datasets/\(.dataset)@~parquet/\(.config)/\(.split)/\(.filename)"'
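The same alias construction can be checked offline against a hand-written sample payload (the field names match the real /parquet response; the values here are stand-ins):

```shell
# Sample /parquet-shaped payload, piped through the same jq program as above.
sample='{"parquet_files":[{"dataset":"cfahlgren1/hub-stats","config":"default","split":"train","filename":"0000.parquet"}]}'
echo "$sample" | jq -r '.parquet_files[] | "hf://datasets/\(.dataset)@~parquet/\(.config)/\(.split)/\(.filename)"'
# → hf://datasets/cfahlgren1/hub-stats@~parquet/default/train/0000.parquet
```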
Run SQL query:
npx -y -p parquetlens -p @parquetlens/sql parquetlens \
"hf://datasets/<namespace>/<repo>@~parquet/<config>/<split>/<shard>.parquet" \
--sql "SELECT * FROM data LIMIT 20"
Export variants (swap the --sql argument):
--sql "COPY (SELECT * FROM data LIMIT 1000) TO 'export.csv' (FORMAT CSV, HEADER, DELIMITER ',')"
--sql "COPY (SELECT * FROM data LIMIT 1000) TO 'export.json' (FORMAT JSON)"
--sql "COPY (SELECT * FROM data LIMIT 1000) TO 'export.parquet' (FORMAT PARQUET)"

To publish your own parquet dataset, use one of the following flows depending on dependency constraints.
Zero local dependencies (Hub UI):
Create the dataset repo at https://huggingface.co/new-dataset and upload parquet files through the web UI, then verify the Viewer has indexed them:
curl -s "https://datasets-server.huggingface.co/parquet?dataset=<namespace>/<repo>"
Low dependency CLI flow (npx @huggingface/hub / hfjs):
export HF_TOKEN=<your_hf_token>
npx -y @huggingface/hub upload datasets/<namespace>/<repo> ./local/parquet-folder data
For a private repo, add --private:
npx -y @huggingface/hub upload datasets/<namespace>/<repo> ./local/parquet-folder data --private
After upload, call /parquet to discover <config>/<split>/<shard> values for querying with @~parquet.
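That discovery step can be sketched offline: extract the <config>/<split>/<shard> triples from a /parquet-shaped payload (the payload below is a hand-written stand-in using the endpoint's real field names):

```shell
# Pull config/split/filename triples out of a sample /parquet payload.
payload='{"parquet_files":[{"config":"data","split":"train","filename":"0000.parquet"},{"config":"data","split":"train","filename":"0001.parquet"}]}'
echo "$payload" | jq -r '.parquet_files[] | "\(.config)/\(.split)/\(.filename)"'
# → data/train/0000.parquet
# → data/train/0001.parquet
```

Each printed triple slots directly into the hf://datasets/<namespace>/<repo>@~parquet/… alias shape shown earlier.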