From jeremylongshore-claude-code-plugins-plus-skills
Provides model evaluation metrics operations for ML training, with step-by-step guidance, production-ready code, and best practices for PyTorch, TensorFlow, and scikit-learn workflows.
```shell
npx claudepluginhub jeremylongshore/claude-code-plugins-plus-skills --plugin langchain-py-pack
```

This skill is limited to using the following tools:
This skill provides automated assistance for model evaluation metrics tasks within the ML Training domain.
Evaluates machine learning models using metrics such as accuracy, precision, recall, and F1-score via the model-evaluation-suite plugin. Useful for performance analysis, validation, model comparison, and optimization.
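The metrics listed above map directly onto scikit-learn's standard metric functions. A minimal sketch of that kind of evaluation, on a synthetic dataset (the dataset and model here are illustrative, not part of the plugin):

```python
# Illustrative sketch: compute accuracy, precision, recall, and F1
# for a simple classifier, using scikit-learn's metric functions.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score
from sklearn.model_selection import train_test_split

# Synthetic binary-classification data, split into train/test sets.
X, y = make_classification(n_samples=500, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
pred = model.predict(X_te)

# Collect the four headline metrics in one place, as an
# evaluation suite typically would before reporting them.
metrics = {
    "accuracy": accuracy_score(y_te, pred),
    "precision": precision_score(y_te, pred),
    "recall": recall_score(y_te, pred),
    "f1": f1_score(y_te, pred),
}
for name, value in metrics.items():
    print(f"{name}: {value:.3f}")
```

The same pattern extends to multi-class problems by passing an `average` argument (e.g. `average="macro"`) to the precision, recall, and F1 functions.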
Builds model explainability tool operations for ML training, including data preparation, model training, hyperparameter tuning, and experiment tracking. Activates on 'model explainability tool' phrases.
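The listing does not say which explainability technique the tool uses; one common, model-agnostic choice that scikit-learn ships is permutation importance, sketched here on a synthetic model (all names and data below are illustrative assumptions):

```python
# Illustrative sketch of model explainability via permutation
# importance: shuffle each feature and measure the score drop.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

X, y = make_classification(n_samples=300, n_features=5, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

# Larger mean importance => the model relies more on that feature.
result = permutation_importance(model, X, y, n_repeats=5, random_state=0)
for i, imp in enumerate(result.importances_mean):
    print(f"feature_{i}: {imp:.3f}")
```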
Assesses ML pipeline stage and applies patterns for data pipelines, model training, serving, MLOps, evaluation, and debugging with validations like schema checks, drift detection, and skew guards.
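Of the validations named above, drift detection is the easiest to sketch: compare a feature's training distribution against its serving distribution with a two-sample test. A minimal version using SciPy's Kolmogorov–Smirnov test (the threshold and simulated data are illustrative assumptions, not the skill's actual defaults):

```python
# Illustrative drift check: flag a feature whose serving
# distribution differs significantly from its training distribution.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
train_feature = rng.normal(0.0, 1.0, 1000)
serving_feature = rng.normal(0.5, 1.0, 1000)  # mean-shifted: simulated drift

# Two-sample KS test; a tiny p-value means the distributions differ.
stat, p_value = ks_2samp(train_feature, serving_feature)
drift_detected = p_value < 0.01  # alert threshold chosen for illustration
print(f"KS statistic={stat:.3f}, p={p_value:.2e}, drift={drift_detected}")
```

Schema checks and training/serving skew guards follow the same shape: compare an observed property (column types, value ranges, feature statistics) against a reference captured at training time.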
Share bugs, ideas, or general feedback.
This skill activates automatically when your request mentions model evaluation metrics.

Example: Basic Usage
Request: "Help me with model evaluation metrics"
Result: Step-by-step guidance and appropriate generated configurations
| Error | Cause | Solution |
|---|---|---|
| Configuration invalid | Missing required fields | Check documentation for required parameters |
| Tool not found | Dependency not installed | Install required tools per prerequisites |
| Permission denied | Insufficient access | Verify credentials and permissions |
Part of the ML Training skill category. Tags: ml, training, pytorch, tensorflow, sklearn