From jeremylongshore-claude-code-plugins-plus-skills
Assists with ONNX converter operations for ML deployment, providing step-by-step guidance, production-ready code, and configurations for model serving, inference, MLOps pipelines, monitoring, and optimization.
```shell
npx claudepluginhub jeremylongshore/claude-code-plugins-plus-skills --plugin langchain-py-pack
```

This skill is limited to using the following tools:
This skill provides automated assistance for onnx converter tasks within the ML Deployment domain.
Assists with model quantization tool operations for ML deployment, providing step-by-step guidance, best practices, production-ready code, and configurations for MLOps, serving, inference, and monitoring.
Optimizes ML models for reduced size, faster inference, and edge deployment using quantization, pruning, knowledge distillation, ONNX export, and TensorRT.
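To make the quantization idea above concrete, here is a conceptual NumPy sketch of symmetric per-tensor int8 quantization (a simplified illustration, not this skill's actual implementation): float32 weights are mapped to int8, shrinking storage 4x at the cost of a small, bounded rounding error.

```python
# Conceptual sketch of symmetric per-tensor int8 quantization.
import numpy as np

def quantize_int8(w: np.ndarray):
    scale = float(np.abs(w).max()) / 127.0  # map the max magnitude to 127
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.standard_normal((256, 256)).astype(np.float32)

q, scale = quantize_int8(w)
ratio = w.nbytes / q.nbytes                               # 4.0: float32 -> int8
max_err = float(np.abs(dequantize(q, scale) - w).max())   # bounded by ~scale/2
```

Production tools (ONNX Runtime, TensorRT) apply the same principle per-channel and with calibration data, but the storage/accuracy trade-off shown here is the core mechanism.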
Exports TensorFlow models to SavedModel and TensorFlow Lite formats, with quantization and serving signatures for production and edge deployment.
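The SavedModel-then-TFLite path above can be sketched as follows. This assumes TensorFlow is installed; the toy model and file paths are illustrative only:

```python
# Hedged sketch of the SavedModel -> TFLite export path; assumes
# TensorFlow is installed. The toy model and paths are illustrative.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(4,)),
    tf.keras.layers.Dense(2),
])
tf.saved_model.save(model, "saved_model_dir")  # format consumed by TF Serving

converter = tf.lite.TFLiteConverter.from_saved_model("saved_model_dir")
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # dynamic-range quantization
tflite_bytes = converter.convert()

with open("model.tflite", "wb") as f:
    f.write(tflite_bytes)
```

`Optimize.DEFAULT` with no representative dataset gives dynamic-range quantization; supplying a representative dataset to the converter would enable full integer quantization for edge targets.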
This skill activates automatically when you:
Example: Basic Usage

- Request: "Help me with onnx converter"
- Result: Provides step-by-step guidance and generates appropriate configurations
| Error | Cause | Solution |
|---|---|---|
| Configuration invalid | Missing required fields | Check documentation for required parameters |
| Tool not found | Dependency not installed | Install required tools per prerequisites |
| Permission denied | Insufficient access | Verify credentials and permissions |
Part of the ML Deployment skill category. Tags: mlops, serving, inference, monitoring, production