Trains large language models (2B-462B parameters) using NVIDIA Megatron-Core with advanced parallelism strategies. Use when training models with more than 1B parameters, when you need maximum GPU efficiency (47% MFU on H100), or when you require tensor, pipeline, sequence, context, or expert parallelism. Production-ready framework used to train Nemotron, LLaMA, and DeepSeek.
/plugin marketplace add zechenzhangAGI/AI-research-SKILLs
/plugin install training-llms-megatron@zechenzhangAGI/AI-research-SKILLs
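As a rough illustration of the parallelism strategies mentioned above, the sketch below shows how Megatron-Core's `parallel_state` module is typically used to set tensor, pipeline, context, and expert parallel sizes. This is a minimal example, not the plugin's own training script; the exact argument names assume a recent `megatron-core` release and may differ across versions, and the process group must already be launched under a distributed launcher such as `torchrun`.

```python
# Minimal sketch: initializing Megatron-Core model parallelism.
# Assumes a recent megatron-core release and a torchrun-launched job.
import torch
from megatron.core import parallel_state


def init_parallelism():
    # torch.distributed must be initialized before Megatron-Core's
    # parallel groups can be created (torchrun sets the env vars).
    torch.distributed.init_process_group(backend="nccl")

    parallel_state.initialize_model_parallel(
        tensor_model_parallel_size=2,    # shard attention/MLP weights across 2 GPUs
        pipeline_model_parallel_size=4,  # split transformer layers into 4 pipeline stages
        context_parallel_size=1,         # shard the sequence dimension for long contexts
        expert_model_parallel_size=1,    # distribute MoE experts across GPUs (MoE models only)
    )
```

Data parallelism then uses whatever GPUs remain after the tensor, pipeline, context, and expert dimensions are allocated, so the product of these sizes must divide the total world size.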