Multi-GPU and multi-node training. Includes DeepSpeed (ZeRO stages), PyTorch FSDP (native sharding), Accelerate (distributed training in ~4 lines of changes), Megatron-Core (tensor/pipeline parallelism), PyTorch Lightning (Trainer class), and Ray Train (cluster orchestration). Use when training large models across GPUs or nodes.
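As a taste of the Accelerate pattern this entry refers to, here is a minimal sketch; the tiny model, optimizer, and synthetic data are placeholders, not part of the skill itself:

```python
# Minimal sketch of Hugging Face Accelerate's "few lines of changes" pattern.
# The model, optimizer, and synthetic data below are illustrative stand-ins.
import torch
from accelerate import Accelerator

accelerator = Accelerator()  # 1) detects devices/processes from the launch env
model = torch.nn.Linear(128, 10)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
dataset = torch.utils.data.TensorDataset(
    torch.randn(256, 128), torch.randint(0, 10, (256,))
)
dataloader = torch.utils.data.DataLoader(dataset, batch_size=32)

# 2) wrap everything so it lands on the right device and shards across workers
model, optimizer, dataloader = accelerator.prepare(model, optimizer, dataloader)

for inputs, targets in dataloader:
    optimizer.zero_grad()
    loss = torch.nn.functional.cross_entropy(model(inputs), targets)
    accelerator.backward(loss)  # 3) replaces loss.backward()
    optimizer.step()
```

Launched with `accelerate launch train.py` (after `accelerate config`), the same script scales from a single GPU to multiple GPUs or nodes without further changes.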
/plugin marketplace add zechenzhangAGI/AI-research-SKILLs
/plugin install distributed-training@ai-research-skills

Expert guidance for Next.js Cache Components and Partial Prerendering (PPR). Proactively activates in projects with cacheComponents enabled.
Adds educational insights about implementation choices and codebase patterns (mimics the deprecated Explanatory output style).
Easily create hooks to prevent unwanted behaviors by analyzing conversation patterns.
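A hypothetical sketch of what such a generated guard could look like, assuming Claude Code's hook contract (event JSON on stdin; exit code 2 blocks the tool call and surfaces stderr back to the model); the pattern and message here are illustrative only:

```python
#!/usr/bin/env python3
# Hypothetical PreToolUse hook: blocks Bash commands matching an unwanted
# pattern. Assumes the documented hook contract (JSON on stdin, exit code 2
# blocks) -- verify against the current Claude Code docs before relying on it.
import json
import re
import sys

event = json.load(sys.stdin)
command = event.get("tool_input", {}).get("command", "")

# Illustrative unwanted behavior: force-pushing.
if re.search(r"git\s+push\b.*--force", command):
    print("Blocked: force-push detected. Use a regular push instead.", file=sys.stderr)
    sys.exit(2)  # exit 2 = block the tool call

sys.exit(0)  # allow everything else
```

Registered as a PreToolUse hook for the Bash tool in settings, a script like this would run before each command and veto matches.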
Frontend design skill for UI/UX implementation.