Guides LoRA, full fine-tuning, DPO preference tuning, VLM training, function-calling tuning, and reasoning tuning on Together AI, adapting models to custom datasets.
Install: `npx claudepluginhub togethercomputer/skills`
Use Together AI fine-tuning when the user needs to adapt a model to their own data or behavior.
Fine-tunes open-source models using Together AI's Python SDK and OpenAI-compatible API. Guides JSONL data prep, file upload, job creation, monitoring, and inference.
Covers LoRA/QLoRA/PEFT methods end to end: JSONL dataset preparation, hyperparameter tuning, training, evaluation metrics, and model deployment.
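A minimal end-to-end sketch of that flow, assuming the v1-style client surface (`Together()`, `files.upload`, `fine_tuning.create`, `fine_tuning.retrieve`) carries over unchanged to `together>=2.0.0`; the dataset contents, model name, and hyperparameters below are placeholders, not recommendations:

```python
import json
import time

from together import Together

# Assumes TOGETHER_API_KEY is set in the environment.
client = Together()

# 1. Prepare training data as JSONL, one OpenAI-compatible chat example per line.
#    These conversations are placeholders; substitute your own data.
examples = [
    {
        "messages": [
            {"role": "system", "content": "You are a concise support assistant."},
            {"role": "user", "content": "How do I reset my password?"},
            {"role": "assistant", "content": "Open Settings > Security and choose Reset Password."},
        ]
    },
]
with open("train.jsonl", "w") as f:
    for example in examples:
        f.write(json.dumps(example) + "\n")

# 2. Upload the file to Together.
uploaded = client.files.upload(file="train.jsonl")

# 3. Create a LoRA fine-tuning job. Model name and epoch count are illustrative.
job = client.fine_tuning.create(
    training_file=uploaded.id,
    model="meta-llama/Meta-Llama-3.1-8B-Instruct-Reference",
    n_epochs=3,
    lora=True,
)

# 4. Poll the job until it reaches a terminal state. The status field may be an
#    enum or a string depending on SDK version, so compare on the lowered text.
while True:
    current = client.fine_tuning.retrieve(job.id)
    print("job status:", current.status)
    state = str(current.status).lower()
    if any(word in state for word in ("completed", "error", "cancelled")):
        break
    time.sleep(60)
```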
Supported workflows in this repo:
- together-chat-completions for plain inference without training
- together-evaluations to measure a model before or after tuning
- together-dedicated-endpoints to host the resulting tuned model
- together-gpu-clusters only when the user needs raw infrastructure rather than managed tuning

Requires the Together Python SDK (`together>=2.0.0`). If the user is on an older version, they must upgrade first: `uv pip install --upgrade "together>=2.0.0"`.
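Once a job finishes, the tuned model can be queried through the same OpenAI-compatible chat completions interface used for plain inference. The model id below is hypothetical; use the output model name reported by the finished fine-tuning job:

```python
from together import Together

client = Together()

# Hypothetical tuned-model id; substitute the name from your completed job.
response = client.chat.completions.create(
    model="your-account/Meta-Llama-3.1-8B-Instruct-Reference-ft-0123abcd",
    messages=[{"role": "user", "content": "How do I reset my password?"}],
)
print(response.choices[0].message.content)
```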