Generate model explainability reports (SHAP, LIME, feature importance)
Generate comprehensive ML model explainability reports with SHAP, LIME, and feature importance analysis to make black-box models transparent and interpretable.
/plugin marketplace add anton-abyzov/specweave
/plugin install sw-ml@specweave

You are generating explainability artifacts for an ML model in a SpecWeave increment. Make the black box transparent.
Compute global feature importance:

```python
from specweave import ModelExplainer

# Rank features by their overall contribution to the model's predictions
explainer = ModelExplainer(model, X_train)
importance = explainer.feature_importance()
```

Create: `feature-importance.png`
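For reference, a minimal sketch of how this step could be reproduced without SpecWeave's wrapper, using scikit-learn's permutation importance. It assumes `model` is a fitted estimator, `X_train` is a pandas DataFrame, and a `y_train` label series exists; none of these come from the SpecWeave API:

```python
# Hypothetical stand-in, not SpecWeave's implementation:
# permutation importance via scikit-learn.
import matplotlib.pyplot as plt
from sklearn.inspection import permutation_importance

result = permutation_importance(model, X_train, y_train, n_repeats=10, random_state=0)
top = result.importances_mean.argsort()[::-1][:10]  # ten most important features

plt.barh([X_train.columns[i] for i in top][::-1], result.importances_mean[top][::-1])
plt.xlabel("Mean decrease in score (permutation importance)")
plt.tight_layout()
plt.savefig("feature-importance.png")
```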
Generate a SHAP summary:

```python
# Aggregate SHAP values across the training set
shap_values = explainer.shap_summary()
```

Create: `shap-summary.png` (beeswarm plot)
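A hedged equivalent using the `shap` library directly might look like this; `TreeExplainer` assumes a tree-based model (random forest, gradient-boosted trees), so swap in `shap.Explainer` for other model types:

```python
# Sketch using the shap library directly, not SpecWeave's API.
import matplotlib.pyplot as plt
import shap

tree_explainer = shap.TreeExplainer(model)  # assumes a tree-based model
shap_values = tree_explainer.shap_values(X_train)

shap.summary_plot(shap_values, X_train, show=False)  # beeswarm plot
plt.savefig("shap-summary.png", bbox_inches="tight")
```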
Plot partial dependence for the top features (e.g. the top 10 from the importance step):

```python
# One partial-dependence plot per top-ranked feature
for feature in top_features:
    pdp = explainer.partial_dependence(feature)
```

Create: `pdp-plots/` directory
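The same plots can be produced with scikit-learn's `PartialDependenceDisplay`; this sketch assumes `top_features` holds column names taken from the importance step:

```python
# Sketch: one PDP image per feature via scikit-learn, not SpecWeave's API.
from pathlib import Path

import matplotlib.pyplot as plt
from sklearn.inspection import PartialDependenceDisplay

Path("pdp-plots").mkdir(exist_ok=True)
for feature in top_features:
    PartialDependenceDisplay.from_estimator(model, X_train, [feature])
    plt.savefig(f"pdp-plots/pdp-{feature}.png", bbox_inches="tight")
    plt.close()
```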
Explain individual predictions:

```python
# Explain sample predictions: a high-confidence case, a low-confidence case,
# and an edge case
samples = [high_confidence, low_confidence, edge_case]
for sample in samples:
    explanation = explainer.explain_prediction(sample)
```

Create: `local-explanations/` directory
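A LIME-based sketch of the same step (the sample names, the use of `predict_proba`, and classification mode are assumptions; use `mode="regression"` and `model.predict` for regressors):

```python
# Sketch: per-sample LIME explanations saved as HTML, not SpecWeave's API.
from pathlib import Path

from lime.lime_tabular import LimeTabularExplainer

Path("local-explanations").mkdir(exist_ok=True)
lime_explainer = LimeTabularExplainer(
    X_train.values,
    feature_names=list(X_train.columns),
    mode="classification",
)
for name, sample in [("high-confidence", high_confidence),
                     ("low-confidence", low_confidence),
                     ("edge-case", edge_case)]:
    # explain_instance expects a 1-D array; pass row.values for DataFrame rows
    exp = lime_explainer.explain_instance(sample, model.predict_proba)
    exp.save_to_file(f"local-explanations/{name}.html")
```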
Create `explainability-report.md`:

```markdown
# Model Explainability Report

## Global Feature Importance
[Top 10 features with importance scores]

## SHAP Analysis
[Summary plot and interpretation]

## Partial Dependence
[How each feature affects predictions]

## Example Explanations
[3-5 example predictions with full explanations]

## Recommendations
[Model improvements based on feature analysis]
```
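Writing the skeleton itself is mechanical; a minimal sketch (the section bodies are placeholders to be replaced with findings from the steps above):

```python
# Sketch: write the report skeleton; fill each body in from the artifacts above.
sections = {
    "Global Feature Importance": "[Top 10 features with importance scores]",
    "SHAP Analysis": "[Summary plot and interpretation]",
    "Partial Dependence": "[How each feature affects predictions]",
    "Example Explanations": "[3-5 example predictions with full explanations]",
    "Recommendations": "[Model improvements based on feature analysis]",
}
with open("explainability-report.md", "w", encoding="utf-8") as f:
    f.write("# Model Explainability Report\n\n")
    for title, body in sections.items():
        f.write(f"## {title}\n\n{body}\n\n")
```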
Report: summarize the artifacts generated (plots, local explanations, and `explainability-report.md`) and the key findings from the feature analysis.