From together-pack
Together AI core workflow A for inference, fine-tuning, and model deployment. Use when working with Together AI's OpenAI-compatible API. Trigger: "together core workflow a".
Install with:

```shell
npx claudepluginhub flight505/skill-forge --plugin together-pack
```
Fine-tune open-source models on your data with Together AI's fine-tuning API.
Prepare training data in JSONL format, one JSON object per line with a `messages` array:

```python
import json

training_data = [
    {"messages": [
        {"role": "system", "content": "You are a customer support agent."},
        {"role": "user", "content": "How do I reset my password?"},
        {"role": "assistant", "content": "Go to Settings > Security > Reset Password."},
    ]},
    {"messages": [
        {"role": "user", "content": "What are your business hours?"},
        {"role": "assistant", "content": "We're open Monday-Friday, 9 AM - 5 PM EST."},
    ]},
]

with open("training.jsonl", "w") as f:
    for item in training_data:
        f.write(json.dumps(item) + "\n")
```
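Before uploading, it can help to sanity-check the data, since a single malformed line fails the whole job. A minimal sketch; `jsonl_errors` is an illustrative helper, not part of the SDK:

```python
import json

def jsonl_errors(lines):
    """Return a list of formatting problems; an empty list means the data is well-formed."""
    errors = []
    for i, line in enumerate(lines, start=1):
        try:
            obj = json.loads(line)
        except json.JSONDecodeError:
            errors.append(f"line {i}: invalid JSON")
            continue
        msgs = obj.get("messages") if isinstance(obj, dict) else None
        if not isinstance(msgs, list) or not msgs:
            errors.append(f"line {i}: missing or empty 'messages' array")
    return errors

# Demo on inline examples; for the real file: jsonl_errors(open("training.jsonl"))
good = '{"messages": [{"role": "user", "content": "hi"}]}'
bad = '{"prompt": "hi"}'
print(jsonl_errors([good, bad]))  # → ["line 2: missing or empty 'messages' array"]
```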
Upload the file and note the returned file ID:

```python
from together import Together

client = Together()  # reads TOGETHER_API_KEY from the environment

file = client.files.upload(file="training.jsonl")
print(f"File ID: {file.id}")
```
Create the fine-tuning job:

```python
job = client.fine_tuning.create(
    training_file=file.id,
    model="meta-llama/Llama-3.3-70B-Instruct-Turbo",
    n_epochs=3,
    learning_rate=1e-5,
    batch_size=4,
    suffix="my-support-bot",
)
print(f"Job ID: {job.id}, Status: {job.status}")
```
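A held-out validation set makes overfitting visible during training; Together's fine-tuning API can also take a separate validation file (verify the exact parameter name, assumed `validation_file`, against the current docs). A minimal sketch of a deterministic split, with illustrative names:

```python
import random

def split_examples(items, val_fraction=0.1, seed=0):
    """Shuffle deterministically, then split examples into (train, validation) lists."""
    items = list(items)
    random.Random(seed).shuffle(items)
    n_val = max(1, int(len(items) * val_fraction))
    return items[n_val:], items[:n_val]

examples = [{"messages": [{"role": "user", "content": str(i)}]} for i in range(10)]
train, val = split_examples(examples)
print(len(train), len(val))  # → 9 1
```

Write each list to its own `.jsonl` file and upload both before creating the job.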
Poll until the job reaches a terminal state:

```python
import time

while True:
    status = client.fine_tuning.retrieve(job.id)
    print(f"Status: {status.status}, Step: {status.training_steps_completed}")
    if status.status in ("completed", "failed", "cancelled"):
        break
    time.sleep(30)
```
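The loop above runs forever if a job stalls, so adding a timeout is a small hardening step. A hedged sketch; `wait_for_job` is an illustrative wrapper, not part of the SDK:

```python
import time

def wait_for_job(fetch, poll_interval=30, timeout=4 * 3600):
    """Poll fetch() until the job reaches a terminal status or the timeout elapses."""
    deadline = time.monotonic() + timeout
    while True:
        status = fetch()
        if status.status in ("completed", "failed", "cancelled"):
            return status
        if time.monotonic() >= deadline:
            raise TimeoutError(f"job still {status.status!r} after {timeout}s")
        time.sleep(poll_interval)

# Usage with the Together client (sketch):
# final = wait_for_job(lambda: client.fine_tuning.retrieve(job.id))
```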
Once the job completes, call the fine-tuned model like any other chat model:

```python
if status.status == "completed":
    print(f"Fine-tuned model: {status.fine_tuned_model}")
    response = client.chat.completions.create(
        model=status.fine_tuned_model,  # your custom model ID
        messages=[{"role": "user", "content": "How do I cancel my subscription?"}],
    )
    print(response.choices[0].message.content)
```
Common errors:

| Error | Cause | Solution |
|---|---|---|
| Invalid JSONL | Malformed training file | Each line must be valid JSON with a `messages` array |
| Training OOM | Batch size too large | Reduce `batch_size` |
| Job failed | Data quality issue | Check the training file format and contents |
For batch inference and dedicated endpoints, see `together-core-workflow-b`.