Explains ML model predictions using SHAP, LIME, and feature importance to identify influential features and debug behavior.
Install with:

```
npx claudepluginhub jeremylongshore/claude-code-plugins-plus-skills --plugin model-explainability-tool
```
Interpret machine learning model predictions using SHAP, LIME, and feature importance analysis to explain model behavior.
Explains machine learning models using SHAP values, LIME approximations, feature importance, partial dependence plots, and attention visualizations for debugging, trust, and compliance.
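For example, a partial dependence plot shows how the model's average prediction responds as one feature varies. A minimal sketch using scikit-learn, with a synthetic dataset and model standing in for the user's own:

```python
import matplotlib.pyplot as plt
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import PartialDependenceDisplay

# Synthetic data and model as placeholders for the user's own.
X, y = make_classification(n_samples=500, n_features=6, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X, y)

# How the predicted probability responds to features 0 and 3,
# averaging out the other features.
PartialDependenceDisplay.from_estimator(model, X, features=[0, 3])
plt.show()
```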
Provides guidance for DataRobot model explainability: prediction explanations via SHAP values, feature impact analysis, and diagnostics such as ROC curves and confusion matrices, for interpreting model decisions.
Computes SHAP values and generates plots (waterfall, beeswarm, bar, scatter, force, heatmap) to explain ML model predictions, feature importance, and bias. Supports XGBoost, PyTorch, TensorFlow, and black-box models.
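As a rough illustration of that workflow (a sketch with a synthetic dataset standing in for real data), the following computes SHAP values for an XGBoost classifier and renders two of the plot types named above:

```python
import shap
import xgboost
from sklearn.datasets import make_classification

# Synthetic binary-classification data as a placeholder for a real dataset.
X, y = make_classification(n_samples=500, n_features=8, random_state=0)
model = xgboost.XGBClassifier().fit(X, y)

# For tree ensembles, shap.Explainer dispatches to an exact tree algorithm.
explainer = shap.Explainer(model, X)
shap_values = explainer(X)

shap.plots.waterfall(shap_values[0])  # local: one prediction, feature by feature
shap.plots.beeswarm(shap_values)      # global: distribution of impacts per feature
```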
This skill empowers Claude to analyze and explain machine learning models. It helps users understand why a model makes certain predictions, identify the most influential features, and gain insights into the model's overall behavior.
This skill activates when you need to:
- explain why a model made a specific prediction
- identify the features that most influence a model's output
- debug unexpected model behavior
- document model decisions for trust or compliance reviews
User request: "Explain why this loan application was rejected."
The skill will:
- compute per-feature contributions for that single application (SHAP values or a LIME surrogate)
- identify the features that pushed the prediction toward rejection
- render a local explanation, such as a waterfall or force plot (see the sketch below)
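A hedged sketch of that local workflow using LIME; the feature names, dataset, and model below are synthetic stand-ins for a real loan application dataset and classifier:

```python
import numpy as np
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Synthetic stand-ins for a real loan dataset and model.
feature_names = ["income", "debt_ratio", "credit_history_len", "num_open_loans"]
X, y = make_classification(n_samples=500, n_features=4, random_state=0)
loan_model = RandomForestClassifier(random_state=0).fit(X, y)

explainer = LimeTabularExplainer(
    training_data=X,
    feature_names=feature_names,
    class_names=["rejected", "approved"],
    mode="classification",
)

# Fit a local linear surrogate around one applicant and report the
# feature conditions that pushed the prediction toward rejection.
exp = explainer.explain_instance(
    data_row=X[0],
    predict_fn=loan_model.predict_proba,
    num_features=4,
)
print(exp.as_list())  # [(feature condition, signed weight), ...]
```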
User request: "Interpret the customer churn model and identify the most important factors."
The skill will:
- compute SHAP values across the customer dataset
- rank features by mean absolute contribution to surface the top churn drivers
- summarize global behavior with bar or beeswarm plots and partial dependence plots (sketched below)
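A minimal sketch of that global ranking; `churn_model` and the DataFrame `X` below are synthetic placeholders for the user's fitted model and customer data:

```python
import numpy as np
import pandas as pd
import shap
import xgboost
from sklearn.datasets import make_classification

# Synthetic stand-ins for a real churn dataset and model.
data, y = make_classification(n_samples=500, n_features=6, random_state=0)
X = pd.DataFrame(data, columns=[f"feature_{i}" for i in range(6)])
churn_model = xgboost.XGBClassifier().fit(X, y)

explainer = shap.Explainer(churn_model, X)
shap_values = explainer(X)

# Global importance: average magnitude of each feature's contribution.
mean_abs = np.abs(shap_values.values).mean(axis=0)
for name, score in sorted(zip(X.columns, mean_abs), key=lambda t: -t[1]):
    print(f"{name}: {score:.3f}")

shap.plots.bar(shap_values)  # the same ranking as a bar chart
```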
This skill integrates with other data analysis and visualization plugins to form a complete model-understanding workflow: pair it with data cleaning and preprocessing plugins to ensure input quality, and with visualization tools to present explanation results clearly.
The skill produces structured output relevant to the task, such as ranked feature contributions and the plots described above.