Fine-tune LLMs with TRL using reinforcement learning and preference methods: SFT for instruction tuning, DPO for preference alignment, PPO/GRPO for reward optimization, and reward model training. Use when you need RLHF, want to align a model with preferences, or want to train from human feedback. Works with Hugging Face Transformers.
/plugin marketplace add zechenzhangAGI/AI-research-SKILLs
/plugin install fine-tuning-with-trl@zechenzhangAGI/AI-research-SKILLs
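
A minimal sketch of the SFT step using TRL's `SFTTrainer`, assuming a recent TRL release; the model and dataset names below are placeholders, not part of this skill:

```python
# Minimal SFT sketch with TRL (model and dataset names are illustrative placeholders).
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# Any causal LM on the Hub and any text or prompt/completion dataset will do.
dataset = load_dataset("trl-lib/Capybara", split="train")

trainer = SFTTrainer(
    model="Qwen/Qwen2.5-0.5B",                      # Hub model id or a preloaded model
    train_dataset=dataset,
    args=SFTConfig(output_dir="qwen2.5-0.5b-sft"),  # standard TrainingArguments fields also apply
)
trainer.train()
```

The same pattern carries over to the other trainers the skill covers (`DPOTrainer`, `PPOTrainer`, `GRPOTrainer`, `RewardTrainer`), each paired with its own config class and a suitably formatted dataset.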