Fast structured generation and serving for LLMs with RadixAttention prefix caching. Use it for JSON/regex outputs, constrained decoding, agentic workflows with tool calls, or when you need up to 5x higher throughput than vLLM on workloads with shared prefixes. Powers 300,000+ GPUs at xAI, AMD, NVIDIA, and LinkedIn.
/plugin marketplace add zechenzhangAGI/AI-research-SKILLs
/plugin install sglang@zechenzhangAGI/AI-research-SKILLs
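For a sense of what constrained decoding looks like, here is a minimal sketch using SGLang's Python frontend DSL. It assumes a local SGLang server on port 30000; the function name `character_gen` and the regex pattern are illustrative, not part of the skill.

```python
import sglang as sgl

# Constrain generation to match a JSON-shaped regex (pattern is illustrative).
@sgl.function
def character_gen(s, name):
    s += name + " as a JSON profile:\n"
    s += sgl.gen(
        "profile",
        max_tokens=128,
        regex=r'\{"name": "[\w ]+", "age": [0-9]+\}',
    )

# Assumes a server already launched, e.g.:
#   python -m sglang.launch_server --model-path <model> --port 30000
sgl.set_default_backend(sgl.RuntimeEndpoint("http://localhost:30000"))

state = character_gen.run(name="Alice")
print(state["profile"])  # e.g. {"name": "Alice", "age": 30}
```

Repeated calls sharing the same prompt prefix (here, the template text) hit the RadixAttention cache, which is where the prefix-sharing speedup comes from.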