Optimizes LLM inference with NVIDIA TensorRT-LLM for high throughput and low latency. Use it for production deployment on NVIDIA GPUs (A100/H100), when you need inference substantially faster than stock PyTorch (often 10-100x), or for serving models with quantization (FP8/INT4), in-flight batching, and multi-GPU scaling.
/plugin marketplace add zechenzhangAGI/AI-research-SKILLs
/plugin install tensorrt-llm@zechenzhangAGI/AI-research-SKILLs
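
Once installed, a typical entry point is TensorRT-LLM's high-level Python LLM API. Below is a minimal sketch, assuming a recent `tensorrt-llm` release (>= 0.10, which ships `LLM`/`SamplingParams`) is installed and a supported NVIDIA GPU is available; the model name is illustrative.

```python
# Minimal sketch of TensorRT-LLM's high-level LLM API (assumes tensorrt-llm >= 0.10).
# The model name is illustrative; any Hugging Face checkpoint supported by
# TensorRT-LLM works. The TensorRT engine is compiled on first load.
from tensorrt_llm import LLM, SamplingParams

def main():
    # For multi-GPU scaling, the LLM API accepts tensor_parallel_size
    # (e.g. tensor_parallel_size=2 to shard across two GPUs).
    llm = LLM(model="meta-llama/Llama-3.1-8B-Instruct")

    params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)
    outputs = llm.generate(
        ["Explain in-flight batching in one sentence."], params
    )

    for out in outputs:
        print(out.outputs[0].text)

if __name__ == "__main__":
    main()
```

Passing a list of prompts to `generate` lets the runtime batch requests; in a serving setup, in-flight (continuous) batching is handled by the runtime rather than by user code.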