From the mlx plugin
Fine-tune language models with LoRA, QLoRA, or full fine-tuning. Covers unsloth (4x memory reduction), PEFT, trl SFTTrainer, DPO, instruction tuning with chat templates, dataset preparation, and evaluation. Use when fine-tuning any HuggingFace model on custom data.
```bash
npx claudepluginhub damionrashford/mlx --plugin mlx
```

This skill is limited to a fixed set of tools.
Fine-tune language models efficiently with LoRA, QLoRA, or unsloth.
```bash
# Prepare dataset
uv run ${CLAUDE_SKILL_DIR}/scripts/prepare_dataset.py data/raw.csv --format alpaca --output data/train.jsonl
```
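For reference, `--format alpaca` suggests records in the common Alpaca instruction schema. Here is a minimal sketch of what one line of `data/train.jsonl` would then contain; the exact field names emitted by `prepare_dataset.py` are an assumption, not confirmed here:

```python
import json

# Hypothetical Alpaca-style record; field names assume the common
# instruction/input/output convention, not this script's actual output.
record = {
    "instruction": "Summarize the following support ticket.",
    "input": "Customer reports login failures after upgrading to v2.3.",
    "output": "Post-upgrade login failure, likely a session-token regression.",
}
print(json.dumps(record))  # JSONL: one JSON object per line
```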
```python
# Fine-tune with LoRA (unsloth)
from unsloth import FastLanguageModel

# Load the base model in 4-bit (QLoRA-style) to cut memory use
model, tokenizer = FastLanguageModel.from_pretrained(
    'mistralai/Mistral-7B-v0.1', load_in_4bit=True
)
# Wrap it with LoRA adapters on the attention query/value projections
model = FastLanguageModel.get_peft_model(
    model, r=16, lora_alpha=32, target_modules=['q_proj', 'v_proj']
)
# ... training loop
```
The bundled `scripts/prepare_dataset.py` handles dataset preparation (see the quick start above). For deployment, call `model.merge_and_unload()` to fold the trained LoRA adapters back into the base weights, as sketched below. See `references/fine-tune-guide.md` for complete LoRA/QLoRA/unsloth documentation.
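A minimal merge-and-save sketch using PEFT's `merge_and_unload()`; the adapter directory `outputs` is a hypothetical path carried over from the training sketch above:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM

# Reload the base model in full/half precision; merging into a
# 4-bit-quantized base is lossy and generally avoided.
base = AutoModelForCausalLM.from_pretrained('mistralai/Mistral-7B-v0.1')
model = PeftModel.from_pretrained(base, 'outputs')  # hypothetical adapter dir
merged = model.merge_and_unload()  # bakes the LoRA deltas into the base weights
merged.save_pretrained('merged-model')  # plain HF checkpoint
```

The merged checkpoint loads with vanilla `AutoModelForCausalLM.from_pretrained`, so the serving stack needs no PEFT dependency.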