From mlx
Explain model predictions with SHAP, LIME, integrated gradients, and permutation importance. Generates summary plots, waterfall charts, and force plots. Use when debugging predictions, auditing for bias, or communicating model behavior to stakeholders.
```bash
npx claudepluginhub damionrashford/mlx --plugin mlx
```
Generate model explanations with SHAP, LIME, integrated gradients, and permutation importance.
```bash
# Auto-detect model type and run SHAP
uv run ${CLAUDE_SKILL_DIR}/scripts/shap_explain.py model.joblib data/test.csv
# Output: explanations/shap_summary.png
```
| Model type | Recommended explainer |
|---|---|
| sklearn tree (RF, XGBoost) | SHAP TreeExplainer |
| sklearn linear | SHAP LinearExplainer |
| PyTorch/TF | SHAP DeepExplainer or Captum |
| Any black-box | SHAP KernelExplainer (slow) or LIME |
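As a rough illustration of the table above, here is a minimal dispatch sketch. The helper name `pick_explainer` and the type checks are illustrative only and not part of the skill's scripts; for deep models, SHAP's DeepExplainer or Captum would replace the kernel fallback.

```python
import shap
from sklearn.base import is_classifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression

def pick_explainer(model, X_background):
    """Illustrative explainer selection following the table above."""
    if isinstance(model, RandomForestClassifier):
        # Tree ensembles (random forests, gradient boosting) get the fast, exact TreeExplainer
        return shap.TreeExplainer(model)
    if isinstance(model, LogisticRegression):
        # Linear models have a cheap closed-form explainer
        return shap.LinearExplainer(model, X_background)
    # Anything else: model-agnostic KernelExplainer (accurate but slow on large data)
    predict = model.predict_proba if is_classifier(model) else model.predict
    return shap.KernelExplainer(predict, shap.sample(X_background, 100))
```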
- `shap.summary_plot()` — global feature importance (beeswarm)
- `shap.waterfall_plot()` — single prediction breakdown
- `shap.force_plot()` — interactive prediction visualization
- `shap.dependence_plot()` — feature interaction effects
- `PartialDependenceDisplay` — marginal effect of one feature

See references/explainability-guide.md for complete documentation and code examples.
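For orientation, a short sketch of how these plot helpers fit together. It assumes a fitted tree model `model` and a test DataFrame `X_test` whose SHAP values are single-output (regression or a binary score); these names are placeholders, not outputs of the skill.

```python
import shap
import matplotlib.pyplot as plt
from sklearn.inspection import PartialDependenceDisplay

explainer = shap.TreeExplainer(model)
explanation = explainer(X_test)                # shap.Explanation with .values and .base_values

shap.summary_plot(explanation.values, X_test)  # global feature importance (beeswarm)
shap.plots.waterfall(explanation[0])           # additive breakdown of one prediction
shap.dependence_plot(0, explanation.values, X_test)  # first feature vs. its SHAP values
# shap.force_plot(...) renders an interactive (JS) view; see references/explainability-guide.md

# Marginal effect of a single feature, via scikit-learn rather than SHAP
PartialDependenceDisplay.from_estimator(model, X_test, features=[0])
plt.show()
```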