npx claudepluginhub diegouis/provectus-marketplace --plugin proagent-ml-ai

Want just this skill?
Then install: npx claudepluginhub u/[userId]/[slug]
Building ML & AI Systems - model training, inference optimization, MLOps pipelines, experiment tracking, prompt engineering, embeddings, vector stores, LLM application development, RAG systems, knowledge graph integration (Graphiti), meta-prompting frameworks, LLM judge evaluation, and AWS AI services (SageMaker, Bedrock). Use when performing any machine learning, deep learning, or AI engineering task.
This skill uses the workspace's default tool permissions.
- references/deployment-patterns.md
- references/evaluation-tracking.md
- references/llm-patterns.md
- references/pipeline-workflows.md
- references/training-patterns.md

Building ML & AI Systems
Comprehensive ML and AI skill covering model training, evaluation, deployment, monitoring, LLM application development, RAG systems, and knowledge graph integration.
When to Use This Skill
- Training and evaluating machine learning models (classification, regression, ranking)
- Engineering features and tuning hyperparameters
- Tracking experiments with MLflow or Weights & Biases
- Deploying models via REST APIs, SageMaker, or Bedrock
- Building LLM-powered applications with RAG and prompt engineering
- Creating embeddings and managing vector stores
- Monitoring model performance and detecting data drift
- Building knowledge graphs with Graphiti
- Evaluating AI outputs using LLM judge patterns
- Designing ML pipeline validation workflows
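To give a taste of one workflow on this list, here is a minimal, library-free sketch of the retrieval step at the heart of a RAG system: ranking stored embedding vectors by cosine similarity against a query vector. The toy vectors, store layout, and function names below are illustrative assumptions, not part of the skill itself.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def top_k(query, store, k=2):
    """Return the k document ids whose vectors are most similar to the query."""
    ranked = sorted(store.items(),
                    key=lambda kv: cosine_similarity(query, kv[1]),
                    reverse=True)
    return [doc_id for doc_id, _ in ranked[:k]]

# Toy 3-dimensional "embeddings" standing in for real model output.
store = {
    "doc_a": [1.0, 0.0, 0.0],
    "doc_b": [0.9, 0.1, 0.0],
    "doc_c": [0.0, 1.0, 0.0],
}
print(top_k([1.0, 0.05, 0.0], store))  # doc_a and doc_b are closest
```

A production vector store replaces the linear scan with an approximate-nearest-neighbour index, but the ranking logic is the same.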
When Invoked Without Clear Intent
MANDATORY: You MUST call the AskUserQuestion tool — do NOT render these options as text:
AskUserQuestion(
  header: "ML/AI",
  question: "What ML/AI topic do you need help with?",
  options: [
    { label: "Model Training", description: "Scikit-learn, TensorFlow, XGBoost, feature engineering" },
    { label: "Evaluation & Tracking", description: "Metrics, MLflow, W&B, experiment tracking" },
    { label: "Model Deployment", description: "FastAPI serving, SageMaker, batch prediction, drift monitoring" },
    { label: "LLM & RAG", description: "RAG systems, prompt engineering, embeddings, vector stores, LLM judge" }
  ]
)
If the user selects "Other", offer: ML Pipeline Validation (project structure, validation gates).
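For clarity, the question payload and the "Other" fallback can be sketched as plain data and a dispatch function. This is a paraphrase of the call above, not an official AskUserQuestion schema; the names `QUESTION` and `follow_up` are illustrative.

```python
# Illustrative shape of the question payload (paraphrased, not an official schema).
QUESTION = {
    "header": "ML/AI",
    "question": "What ML/AI topic do you need help with?",
    "options": ["Model Training", "Evaluation & Tracking",
                "Model Deployment", "LLM & RAG"],
}

def follow_up(selection: str) -> str:
    """Sketch of the fallback: 'Other' surfaces the pipeline-validation topic."""
    if selection == "Other":
        return "ML Pipeline Validation (project structure, validation gates)"
    return selection

print(follow_up("Other"))
```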
Reference Routing
CONTEXT GUARD: Load reference files only when the user's request matches a specific topic below. Do NOT load all references upfront.
| User Intent | Reference File |
|---|---|
| Model training, data splitting, scikit-learn, TensorFlow, XGBoost, feature engineering | references/training-patterns.md |
| Model evaluation, metrics, classification report, MLflow, W&B, experiment tracking | references/evaluation-tracking.md |
| Model deployment, FastAPI serving, SageMaker, batch prediction, data drift monitoring | references/deployment-patterns.md |
| RAG systems, prompt engineering, embeddings, vector stores, Graphiti, LLM judge, Bedrock, LangSmith | references/llm-patterns.md |
| ML pipeline validation, project structure, common pitfalls, validation gates | references/pipeline-workflows.md |
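The routing table above amounts to a keyword lookup that loads only the matching reference files. A minimal sketch of that context guard follows; the keyword sets and the `route` function are assumptions chosen for illustration, not part of the skill.

```python
# Map intent keywords to the reference file that should be loaded.
ROUTES = {
    "training": "references/training-patterns.md",
    "xgboost": "references/training-patterns.md",
    "feature engineering": "references/training-patterns.md",
    "mlflow": "references/evaluation-tracking.md",
    "metrics": "references/evaluation-tracking.md",
    "sagemaker": "references/deployment-patterns.md",
    "drift": "references/deployment-patterns.md",
    "rag": "references/llm-patterns.md",
    "embedding": "references/llm-patterns.md",
    "graphiti": "references/llm-patterns.md",
    "validation": "references/pipeline-workflows.md",
}

def route(request: str) -> set[str]:
    """Return only the reference files matching the request (context guard)."""
    text = request.lower()
    return {ref for keyword, ref in ROUTES.items() if keyword in text}

print(route("Set up MLflow tracking for an XGBoost model"))
```

Because the lookup returns a set, a request touching two topics loads both files and nothing else, which is exactly the behaviour the context guard asks for.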
Visual Diagramming with Excalidraw
Use the Excalidraw MCP server to generate ML pipeline diagrams, RAG topology maps, experiment DAGs, and model deployment architecture visualizations. Describe what you need in natural language.
Similar Skills
This related skill activates when the user asks about AI prompts, needs prompt templates, wants to search for prompts, or mentions prompts.chat. Use it for discovering, retrieving, and improving prompts.