Post-training 4-bit quantization for LLMs with minimal accuracy loss. Use it when deploying large models (70B, 405B) on consumer GPUs, when you need 4x memory reduction with <2% perplexity degradation, or when you want 3-4x faster inference than FP16. Integrates with transformers and PEFT for QLoRA fine-tuning.
/plugin marketplace add zechenzhangAGI/AI-research-SKILLs
/plugin install gptq@zechenzhangAGI/AI-research-SKILLs
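
As a rough sketch of the kind of workflow this covers, GPTQ quantization can be driven through the transformers `GPTQConfig` integration; the model id, calibration dataset, and output path below are illustrative, not part of this plugin's own API:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, GPTQConfig

# Illustrative checkpoint; substitute the model you actually want to quantize.
model_id = "meta-llama/Llama-2-7b-hf"

tokenizer = AutoTokenizer.from_pretrained(model_id)

# 4-bit GPTQ configuration; "c4" serves as the calibration dataset here.
quant_config = GPTQConfig(bits=4, dataset="c4", tokenizer=tokenizer)

# Quantizes the weights post-training while loading (requires a GPTQ backend
# such as auto-gptq to be installed).
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=quant_config,
    device_map="auto",
)

# Save the 4-bit model for inference or QLoRA-style fine-tuning with PEFT.
model.save_pretrained("llama-2-7b-gptq-4bit")
tokenizer.save_pretrained("llama-2-7b-gptq-4bit")
```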