The simplest distributed training API: add distributed support to any PyTorch script with a four-line change. One unified API covers DeepSpeed, FSDP, Megatron, and DDP. Automatic device placement and mixed precision (FP16/BF16/FP8). Interactive configuration (`accelerate config`) and a single launch command (`accelerate launch`). The de facto standard in the HuggingFace ecosystem.
/plugin marketplace add zechenzhangAGI/AI-research-SKILLs
/plugin install huggingface-accelerate@zechenzhangAGI/AI-research-SKILLs