Parameter-efficient fine-tuning for LLMs using LoRA, QLoRA, and 25+ other methods. Use when fine-tuning large models (7B-70B) with limited GPU memory, when you need to train under 1% of parameters with minimal accuracy loss, or for multi-adapter serving. Hugging Face's official PEFT library, integrated with the transformers ecosystem.
/plugin marketplace add zechenzhangAGI/AI-research-SKILLs
/plugin install peft-fine-tuning@zechenzhangAGI/AI-research-SKILLs
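A minimal sketch of the core LoRA workflow this skill covers, using the `peft` library's public API (`LoraConfig`, `get_peft_model`); the model name and hyperparameter values are illustrative assumptions, not defaults set by the skill:

```python
# Minimal LoRA sketch with peft; model name and hyperparameters are
# illustrative assumptions, not values mandated by this skill.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, TaskType, get_peft_model

base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")

config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=16,                                 # rank of the low-rank update matrices
    lora_alpha=32,                        # scaling factor applied to the update
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # attention projections to adapt
)

model = get_peft_model(base, config)      # freezes base weights, injects adapters
model.print_trainable_parameters()        # typically well under 1% trainable
```

For QLoRA, the usual pattern is to load the base model 4-bit quantized via bitsandbytes (`BitsAndBytesConfig(load_in_4bit=True)`) and call `peft.prepare_model_for_kbit_training` on it before wrapping with `get_peft_model`.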