# chrisvoncsefalvay-funsloth
Generate Unsloth training notebooks and scripts. Use when the user wants to create a training notebook, configure fine-tuning parameters, or set up SFT/DPO/GRPO training.
Install via claudepluginhub:

```shell
npx claudepluginhub joshuarweaver/cascade-ai-ml-engineering --plugin chrisvoncsefalvay-funsloth
```

This skill uses the workspace's default tool permissions.
Generate training notebooks for fine-tuning with Unsloth.
Copy and customize the template notebook at `notebooks/sft_template.ipynb`, or use a training script directly:

```shell
python scripts/train_sft.py   # Supervised fine-tuning (SFT)
python scripts/train_dpo.py   # Direct preference optimization (DPO)
python scripts/train_grpo.py  # Group relative policy optimization (GRPO)
```
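For SFT, each training example is typically flattened into a single text string before tokenization. A minimal sketch of such a formatting step, assuming a ShareGPT-style record layout (the `conversations`/`from`/`value` field names are an assumption about the dataset; adjust to your data):

```python
# Minimal sketch: flatten a ShareGPT-style record into one training
# string for SFT. The field names follow the common ShareGPT convention
# and are an assumption, not a guaranteed schema.
def format_conversation(example):
    parts = []
    for turn in example["conversations"]:
        role = "### Human:" if turn["from"] == "human" else "### Assistant:"
        parts.append(f"{role}\n{turn['value']}")
    return "\n\n".join(parts)

sample = {
    "conversations": [
        {"from": "human", "value": "What is LoRA?"},
        {"from": "gpt", "value": "LoRA is a parameter-efficient fine-tuning method."},
    ]
}
text = format_conversation(sample)
print(text)
```

In practice you would map a function like this over the dataset (e.g. with `datasets.Dataset.map`) to produce the text column the trainer consumes.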
Ask the user which mode they prefer: the template notebook or a standalone training script.
Use these production-ready defaults:
| Parameter | Default | Reasoning |
|---|---|---|
| Model | unsloth/llama-3.1-8b-unsloth-bnb-4bit | Balances quality and VRAM use |
| Max seq length | 2048 | Covers most use cases |
| Load in 4-bit | True | Roughly 70% VRAM reduction |
| LoRA rank | 16 | Balances adapter capacity and memory |
| Batch size | 2 | Works on 8 GB+ VRAM |
| Gradient accumulation | 4 | Effective batch size of 8 |
| Learning rate | 2e-4 | Unsloth's recommended default |
| Epochs | 1 | Often sufficient |
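The defaults above can be collected into a flat configuration, which also makes the effective batch size explicit. A minimal sketch; the key names mirror common Unsloth/TRL argument names but are illustrative, not a guaranteed API:

```python
# The table's production-ready defaults as a plain config dict.
config = {
    "model_name": "unsloth/llama-3.1-8b-unsloth-bnb-4bit",
    "max_seq_length": 2048,
    "load_in_4bit": True,
    "lora_rank": 16,
    "per_device_train_batch_size": 2,
    "gradient_accumulation_steps": 4,
    "learning_rate": 2e-4,
    "num_train_epochs": 1,
}

# Effective batch size = per-device batch * gradient accumulation steps.
effective_batch = (
    config["per_device_train_batch_size"]
    * config["gradient_accumulation_steps"]
)
print(effective_batch)  # 8
```

Gradient accumulation lets small-VRAM GPUs simulate a larger batch: gradients from 4 micro-batches of 2 are accumulated before each optimizer step.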
Ask questions in order. See MODEL_SELECTION.md for model options and TRAINING_METHODS.md for technique details.
Generate a notebook with interactive configuration widgets. Users select options at runtime.
Generate notebooks with these sections:
Ask where to run training:
- funsloth-hfjobs
- funsloth-runpod
- funsloth-local

Example configuration:

```yaml
notebook_path: "./training_notebook.ipynb"
model_name: "unsloth/llama-3.1-8b-unsloth-bnb-4bit"
dataset_name: "mlabonne/FineTome-100k"
technique: "SFT"
lora_rank: 16
max_seq_length: 2048
batch_size: 2
learning_rate: 2e-4
num_epochs: 1
```
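Since the configuration above is a flat list of `key: value` pairs, it can be read without any dependencies. A minimal stdlib-only sketch (it assumes no nesting and returns every value as a string; a real setup would use a YAML parser such as PyYAML):

```python
# Parse a flat "key: value" config into a dict. All values come back
# as strings; numeric fields would need explicit conversion.
def parse_config(text):
    config = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or ":" not in line:
            continue
        key, _, value = line.partition(":")
        config[key.strip()] = value.strip().strip('"')
    return config

raw = """
model_name: "unsloth/llama-3.1-8b-unsloth-bnb-4bit"
technique: "SFT"
lora_rank: 16
learning_rate: 2e-4
"""
cfg = parse_config(raw)
print(cfg["technique"])  # SFT
```

Note that `cfg["lora_rank"]` is the string `"16"` here; cast fields like rank, batch size, and epochs to `int` (and the learning rate to `float`) before passing them to a trainer.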