Building ML & AI Systems - model training, inference optimization, MLOps pipelines, experiment tracking, prompt engineering, embeddings, vector stores, LLM application development, RAG systems, knowledge graph integration (Graphiti), meta-prompting frameworks, LLM judge evaluation, and AWS AI services (SageMaker, Bedrock). Use when performing any machine learning, deep learning, or AI engineering task.
`npx claudepluginhub diegouis/provectus-marketplace --plugin proagent-ml-ai`

This skill uses the workspace's default tool permissions.
Comprehensive ML and AI skill covering model training, evaluation, deployment, monitoring, LLM application development, RAG systems, and knowledge graph integration.
Generates validated, runnable implementation plans for ML pipelines, architecture designs, and multi-step projects grounded in official framework documentation.
Dispatches to ML/AI sub-skills for LLM integrations, prompt engineering with evals, model pipelines, performance evaluations, RAG, and system inventory. Use for AI engineering tasks.
Orchestrates AI/ML workflows for LLM app development, RAG implementation, agent architecture, ML pipelines, and AI features. Use for production AI systems including design, integration, and observability.
MANDATORY: You MUST call the AskUserQuestion tool — do NOT render these options as text:
AskUserQuestion(
  header: "ML/AI",
  question: "What ML/AI topic do you need help with?",
  options: [
    { label: "Model Training", description: "Scikit-learn, TensorFlow, XGBoost, feature engineering" },
    { label: "Evaluation & Tracking", description: "Metrics, MLflow, W&B, experiment tracking" },
    { label: "Model Deployment", description: "FastAPI serving, SageMaker, batch prediction, drift monitoring" },
    { label: "LLM & RAG", description: "RAG systems, prompt engineering, embeddings, vector stores, LLM judge" }
  ]
)
If the user selects "Other", offer: ML Pipeline Validation (project structure, validation gates).
CONTEXT GUARD: Load reference files only when the user's request matches a specific topic below. Do NOT load all references upfront.
| User Intent | Reference File |
|---|---|
| Model training, data splitting, scikit-learn, TensorFlow, XGBoost, feature engineering | references/training-patterns.md |
| Model evaluation, metrics, classification report, MLflow, W&B, experiment tracking | references/evaluation-tracking.md |
| Model deployment, FastAPI serving, SageMaker, batch prediction, data drift monitoring | references/deployment-patterns.md |
| RAG systems, prompt engineering, embeddings, vector stores, Graphiti, LLM judge, Bedrock, LangSmith | references/llm-patterns.md |
| ML pipeline validation, project structure, common pitfalls, validation gates | references/pipeline-workflows.md |
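The intent-to-reference routing above can be sketched as a simple keyword dispatch. This is an illustrative sketch only, not part of the skill: the keyword lists and matching logic are assumptions, while the file paths mirror the table.

```python
# Hypothetical sketch of the context-guard routing table above.
# File paths come from the table; keyword lists and matching are illustrative.
ROUTES = {
    "references/training-patterns.md": (
        "training", "scikit-learn", "tensorflow", "xgboost", "feature engineering",
    ),
    "references/evaluation-tracking.md": (
        "evaluation", "metrics", "mlflow", "w&b", "experiment tracking",
    ),
    "references/deployment-patterns.md": (
        "deployment", "fastapi", "sagemaker", "batch prediction", "drift",
    ),
    "references/llm-patterns.md": (
        "rag", "prompt", "embedding", "vector store", "graphiti", "llm judge", "bedrock",
    ),
    "references/pipeline-workflows.md": (
        "pipeline validation", "project structure", "validation gate",
    ),
}

def route(request: str) -> list[str]:
    """Return only the reference files whose keywords appear in the request."""
    text = request.lower()
    return [path for path, keys in ROUTES.items() if any(k in text for k in keys)]
```

Note that a request matching nothing loads no references, which is the point of the context guard: references are pulled in lazily, never all at once.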
Use the Excalidraw MCP server to generate ML pipeline diagrams, RAG topology maps, experiment DAGs, and model deployment architecture visualizations. Describe what you need in natural language.
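To give a flavor of the vector-store retrieval pattern that references/llm-patterns.md covers, here is a minimal in-memory sketch. The bag-of-words "embedding" is a toy stand-in for a real embedding model (e.g. one served via Bedrock); class and function names are hypothetical, not part of the skill.

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Toy bag-of-words 'embedding'; a real RAG system would call an embedding model."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

class MiniVectorStore:
    """In-memory vector store: add documents, retrieve top-k by cosine similarity."""

    def __init__(self) -> None:
        self.docs: list[tuple[str, Counter]] = []

    def add(self, text: str) -> None:
        self.docs.append((text, embed(text)))

    def retrieve(self, query: str, k: int = 1) -> list[str]:
        q = embed(query)
        ranked = sorted(self.docs, key=lambda d: cosine(q, d[1]), reverse=True)
        return [text for text, _ in ranked[:k]]
```

In a full RAG pipeline the retrieved passages would be interpolated into the prompt sent to the LLM; the reference file covers that step along with prompt engineering and LLM-judge evaluation.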