From jeremylongshore-claude-code-plugins-plus-skills
Provides model explainability operations for ML training, including data preparation, model training, hyperparameter tuning, and experiment tracking. Activates on 'model explainability tool' phrases.
`npx claudepluginhub jeremylongshore/claude-code-plugins-plus-skills --plugin langchain-py-pack`

This skill is limited to using the following tools:
This skill provides automated assistance for model explainability tool tasks within the ML Training domain.
Explains ML model predictions using SHAP, LIME, and feature importance to identify influential features and debug behavior.
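To illustrate the feature-importance side of this, here is a minimal, dependency-free sketch of permutation importance: shuffle one feature column at a time and measure how much a metric drops. All names here (`permutation_importance`, `accuracy`, the toy model) are illustrative, not part of the skill's API.

```python
import random

def permutation_importance(model, X, y, metric, n_repeats=10, seed=0):
    """Importance of feature j = average drop in the metric when
    column j is randomly shuffled (breaking its link to the target)."""
    rng = random.Random(seed)
    base = metric(y, [model(row) for row in X])
    n_features = len(X[0])
    importances = []
    for j in range(n_features):
        drops = []
        for _ in range(n_repeats):
            col = [row[j] for row in X]
            rng.shuffle(col)
            # Rebuild X with column j permuted, all other columns intact.
            Xp = [row[:j] + [col[i]] + row[j + 1:] for i, row in enumerate(X)]
            drops.append(base - metric(y, [model(row) for row in Xp]))
        importances.append(sum(drops) / n_repeats)
    return importances

def accuracy(y_true, y_pred):
    return sum(a == b for a, b in zip(y_true, y_pred)) / len(y_true)

# Toy classifier that only looks at feature 0, so shuffling
# feature 1 should yield zero importance.
model = lambda row: 1 if row[0] > 0.5 else 0
X = [[0.9, 5], [0.1, 7], [0.8, 2], [0.2, 9]]
y = [1, 0, 1, 0]
imps = permutation_importance(model, X, y, accuracy)
```

In practice you would use `sklearn.inspection.permutation_importance` rather than hand-rolling this, but the mechanism is the same.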
Computes SHAP values and generates plots (waterfall, beeswarm, bar, scatter, force, heatmap) to explain ML model predictions, feature importance, and bias. Supports XGBoost, PyTorch, TensorFlow, and black-box models.
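For intuition about what the SHAP values behind those plots mean, here is a pure-Python sketch that computes exact Shapley values for a tiny model by enumerating all feature subsets, with "absent" features replaced by a baseline. This brute-force form is only feasible for a handful of features; the SHAP library uses fast model-specific approximations. The function names and toy model are illustrative assumptions, not the library's API.

```python
from itertools import combinations
from math import factorial

def shapley_values(f, x, baseline):
    """Exact Shapley value of each feature: its weighted average
    marginal contribution f(S ∪ {i}) - f(S) over all subsets S."""
    n = len(x)

    def value(subset):
        # Evaluate f with features outside `subset` set to the baseline.
        z = [x[i] if i in subset else baseline[i] for i in range(n)]
        return f(z)

    phi = []
    for i in range(n):
        others = [j for j in range(n) if j != i]
        total = 0.0
        for k in range(n):
            for s in combinations(others, k):
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                total += weight * (value(set(s) | {i}) - value(set(s)))
        phi.append(total)
    return phi

# Toy linear model: for linear f, feature i's Shapley value
# reduces to w[i] * (x[i] - baseline[i]).
w = [2.0, -1.0, 0.5]
f = lambda z: sum(wi * zi for wi, zi in zip(w, z))
x = [1.0, 3.0, 2.0]
base = [0.0, 0.0, 0.0]
print([round(v, 6) for v in shapley_values(f, x, base)])  # → [2.0, -3.0, 1.0]
```

A useful sanity check is the efficiency property: the values always sum to `f(x) - f(baseline)`, which is exactly what a SHAP waterfall plot visualizes.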
This skill activates automatically when you use 'model explainability tool' phrases.
Example: Basic Usage
Request: "Help me with model explainability tool"
Result: Provides step-by-step guidance and generates appropriate configurations
| Error | Cause | Solution |
|---|---|---|
| Configuration invalid | Missing required fields | Check documentation for required parameters |
| Tool not found | Dependency not installed | Install required tools per prerequisites |
| Permission denied | Insufficient access | Verify credentials and permissions |
Part of the ML Training skill category. Tags: ml, training, pytorch, tensorflow, sklearn