# mlx
On-demand ML/data science library expert. Use when the user asks how to use any function, class, or method from NumPy, Pandas, scikit-learn, Matplotlib, TensorFlow, Keras, PyTorch, Seaborn, SciPy, statsmodels, XGBoost, LightGBM, Hugging Face Transformers, OpenCV, NLTK, spaCy, Plotly, Dask, PySpark, SQLAlchemy, or Jupyter. Fetches and synthesizes official API docs, parameter reference, and working code examples. Also use when the user asks "how do I do X in pandas/numpy/torch/sklearn", needs to understand a deep learning layer or training loop, asks about NLP pipelines, computer vision transforms, statistical tests, SQL ORM patterns, or big data ops.
npx claudepluginhub damionrashford/mlx --plugin mlx

This skill is limited to using the following tools:
**Context:** $ARGUMENTS
Run process.py resolve to get the prioritized list of URLs to fetch:
uv run ${CLAUDE_SKILL_DIR}/scripts/process.py resolve \
--library <library-name-or-alias> \
--query "<function, class, or topic>"
The script returns a JSON object with fetch_in_order — a list of URLs ranked by specificity.
Start with priority 1. If it returns 404 or empty content, move to priority 2, then 3.
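The resolve-then-fall-back flow above can be sketched as a small helper. The only part of the JSON shape taken from this document is the fetch_in_order key; the first_useful name and the fetch callback are illustrative assumptions:

```python
import json

def first_useful(resolve_stdout: str, fetch):
    """Walk fetch_in_order in priority order and return the first
    (url, content) pair whose fetch yields non-empty content."""
    urls = json.loads(resolve_stdout)["fetch_in_order"]
    for url in urls:           # priority 1 first, then 2, 3, ...
        content = fetch(url)   # e.g. a WebFetch wrapper; 404/empty -> falsy
        if content:
            return url, content
    return None, None          # every candidate URL came back empty
```

The helper stops at the first hit, mirroring the instruction to try priority 1 and only fall through on a 404 or empty page.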
Accepted library aliases: see references/guide.md for the full alias table.
Examples:
uv run ${CLAUDE_SKILL_DIR}/scripts/process.py resolve --library pandas --query "DataFrame.groupby"
uv run ${CLAUDE_SKILL_DIR}/scripts/process.py resolve --library torch --query "autograd"
uv run ${CLAUDE_SKILL_DIR}/scripts/process.py resolve --library sklearn --query "cross_val_score"
uv run ${CLAUDE_SKILL_DIR}/scripts/process.py resolve --library transformers --query "pipeline"
uv run ${CLAUDE_SKILL_DIR}/scripts/process.py resolve --library scipy --query "hypothesis testing"
To list all supported libraries:
uv run ${CLAUDE_SKILL_DIR}/scripts/process.py list
Use WebFetch on the URLs returned by Step 1, in order. Stop at the first URL that contains useful content, then synthesize that content into a direct answer. Do not paste raw documentation; synthesize it.
When the question spans multiple libraries (e.g. "how do I use a PyTorch model with scikit-learn's cross-validation"), run Step 1 for each library in parallel, fetch both, then synthesize a combined answer that shows how they integrate.
Keras vs tf.keras: Keras 3.x (keras.io) is standalone and backend-agnostic. tf.keras
is the older TF-bundled version. If the user has TF < 2.16, they likely have tf.keras.
Check which one they're importing before advising.
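The TF 2.16 cutoff in the note above can be turned into a quick heuristic; the helper below is a sketch (the function name and return strings are not from any library):

```python
def likely_keras_flavor(tf_version: str) -> str:
    """Guess which Keras a user has from their TensorFlow version string,
    e.g. '2.15.1'. Per the note above: TF < 2.16 bundles legacy tf.keras,
    TF 2.16+ ships standalone Keras 3."""
    major, minor = (int(part) for part in tf_version.split(".")[:2])
    if (major, minor) >= (2, 16):
        return "keras 3 (standalone, backend-agnostic)"
    return "tf.keras (legacy, TF-bundled)"
```

In practice, still confirm with the user's actual import (`import keras` vs `from tensorflow import keras`) before advising.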
torch.nn vs torch.nn.functional: Many operations exist in both namespaces with
different call conventions. torch.nn.Conv2d is a module (stateful, holds its
weights); torch.nn.functional.conv2d (conventionally F.conv2d) is a stateless
function. Their docs live on separate pages, so resolve for the right one.
scikit-learn class paths: API URLs use the full dotted path, e.g.
sklearn.linear_model.LogisticRegression, not just LogisticRegression. Include the
module prefix in the --query argument for direct API lookups.
Pandas 2.x breaking changes: DataFrame.append is removed in 2.0. df.swaplevel
behavior changed. If the user shows old code, check the 2.0 migration guide:
https://pandas.pydata.org/docs/whatsnew/v2.0.0.html
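A minimal before/after for the removed DataFrame.append, assuming the user is migrating 1.x code to pandas 2.x:

```python
import pandas as pd

df = pd.DataFrame({"a": [1, 2]})
row = pd.DataFrame({"a": [3]})

# pandas 1.x (removed in 2.0):
# df = df.append(row, ignore_index=True)

# pandas 2.x replacement:
df = pd.concat([df, row], ignore_index=True)
```

pd.concat takes a list, so batching many rows into one call is also much faster than repeated appends ever were.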
Hugging Face pipeline task names: They changed between versions. Always fetch the
current docs rather than recalling task names from memory (e.g. "text-generation" vs
"text2text-generation").
spaCy model names: en_core_web_sm/md/lg/trf are not installed by default. The docs
show nlp = spacy.load("en_core_web_sm") but users need python -m spacy download en_core_web_sm
first. Always mention this.
OpenCV Python bindings: The Python docs at docs.opencv.org are C++ first. Prefer
fetching the Python tutorials (/tutorial_py_*) over the raw C++ API pages.
PySpark version: API paths differ between Spark 3.x and older versions. The default
URL targets latest. If the user specifies a version, adjust the URL.
Dask DataFrame is NOT Pandas: Dask DataFrames don't support all Pandas operations. Always check the Dask API index rather than assuming Pandas parity.
LightGBM vs XGBoost parameter names: They use different names for the same concept
(e.g. num_leaves in LightGBM vs max_leaves in XGBoost). When helping with both,
always fetch both sets of parameter docs.
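An illustrative (not exhaustive) mapping of equivalent knobs; treat it as a memory aid and still fetch both parameter docs, since defaults, aliases, and valid ranges also differ:

```python
# concept: (LightGBM name, XGBoost name)
EQUIVALENT_PARAMS = {
    "max leaves per tree": ("num_leaves", "max_leaves"),
    "learning rate":       ("learning_rate", "eta"),
    "row subsampling":     ("bagging_fraction", "subsample"),
    "feature subsampling": ("feature_fraction", "colsample_bytree"),
    "L2 regularization":   ("lambda_l2", "reg_lambda"),
}
```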
User: "How do I use pandas pivot_table?"
uv run ${CLAUDE_SKILL_DIR}/scripts/process.py resolve --library pandas --query "DataFrame.pivot_table"
Fetch priority-1 URL → synthesize parameters, example, gotchas about aggfunc defaulting to mean.
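The synthesized answer for this example might include a sketch like the following (the toy data is invented for illustration):

```python
import pandas as pd

df = pd.DataFrame({
    "city":  ["NY", "NY", "LA", "LA"],
    "year":  [2023, 2024, 2023, 2024],
    "sales": [10, 20, 30, 40],
})

# aggfunc defaults to "mean"; pass "sum" (or another reducer) explicitly
table = df.pivot_table(index="city", columns="year",
                       values="sales", aggfunc="sum")
```

Spelling out aggfunc is the gotcha worth surfacing: with duplicate index/column pairs, the silent mean default often masks a summation bug.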
User: "How does autograd work in PyTorch?"
uv run ${CLAUDE_SKILL_DIR}/scripts/process.py resolve --library torch --query "autograd"
Fetch the autograd topic page → explain requires_grad, .backward(), zero_grad(), with example.
User: "How do I fine-tune BERT with the Trainer API?"
uv run ${CLAUDE_SKILL_DIR}/scripts/process.py resolve --library transformers --query "trainer"
uv run ${CLAUDE_SKILL_DIR}/scripts/process.py resolve --library transformers --query "bert"
Fetch both in parallel → synthesize a complete fine-tuning workflow.
User: "How do I run a t-test in Python?"
uv run ${CLAUDE_SKILL_DIR}/scripts/process.py resolve --library scipy --query "hypothesis testing"
Fetch stats topic page → show scipy.stats.ttest_ind and ttest_rel with example and interpretation.
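A sketch of what the synthesized t-test answer could show, using simulated data (the group parameters and seed are invented for illustration):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
group_a = rng.normal(loc=0.0, scale=1.0, size=200)
group_b = rng.normal(loc=0.5, scale=1.0, size=200)  # shifted mean

# Independent two-sample t-test; use stats.ttest_rel for paired samples
result = stats.ttest_ind(group_a, group_b)

# Interpretation: reject the equal-means H0 at alpha=0.05
# when result.pvalue < 0.05
```

With a real half-standard-deviation shift and 200 samples per group, the test should come back significant; the interpretation sentence is the part users most often need.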
User: "How do I do X?"
Run:
uv run ${CLAUDE_SKILL_DIR}/scripts/process.py list
Scan the list for the relevant library. If the task spans multiple (e.g. plot a pandas DataFrame with Seaborn), resolve for both, fetch in parallel, synthesize combined answer.
Read references/guide.md for: