experiment-tracking-setup
Sets up ML experiment tracking with MLflow or Weights & Biases: installs packages, initializes tools, and provides logging code for parameters, metrics, and artifacts.
Install via:

npx claudepluginhub jeremylongshore/claude-code-plugins-plus-skills --plugin experiment-tracking-setup

This skill is limited to using the following tools:
Configure ML experiment tracking with MLflow or Weights & Biases, including environment setup and code for logging parameters, metrics, and artifacts.
Configures MLflow tracking for ML training workflows. Generates code, configuration, and best practices for experiment tracking with PyTorch, TensorFlow, and scikit-learn.
Sets up an MLflow tracking server, autologging for scikit-learn/PyTorch/TensorFlow/XGBoost, run comparisons with metrics and visualizations, and artifact management for reproducible ML workflows. Suited to new projects, log migration, or CI/CD integration.
Onboards users to MLflow by analyzing the codebase for GenAI (LLMs, LangChain) or traditional ML (sklearn, PyTorch) use cases and guiding them through quickstart tutorials and integrations.
This skill streamlines the process of setting up experiment tracking for machine learning projects. It automates environment configuration and tool initialization, and provides code examples to get you started quickly.
This skill activates when you need to set up experiment tracking for a machine learning project. For example:

User request: "track experiments using mlflow"
The skill will install the mlflow Python package, initialize MLflow tracking, and provide code for logging parameters, metrics, and artifacts.

User request: "setup experiment tracking with wandb"
The skill will install the wandb Python package, initialize Weights & Biases tracking, and provide code for logging parameters, metrics, and artifacts.

This skill can be used in conjunction with other skills that generate or modify machine learning code, such as skills for model training or data preprocessing. It ensures that all experiments are properly tracked and documented.
The skill produces structured output tailored to the chosen tool: install commands, initialization snippets, and logging code for parameters, metrics, and artifacts.