vLLM High-Performance LLM Serving
Serves LLMs with high throughput using vLLM's PagedAttention and continuous batching. Use when deploying production LLM APIs, optimizing inference latency/throughput, or serving models with limited GPU memory. Supports OpenAI-compatible endpoints, quantization (GPTQ/AWQ/FP8), and tensor parallelism.
To enable users to deploy LLM APIs with high throughput and low latency using vLLM's advanced features for production environments.
Features
- High-throughput LLM serving
- Optimized inference latency
- Efficient memory usage with PagedAttention
- OpenAI-compatible API endpoint
- Support for quantization (AWQ, GPTQ, FP8)
- Tensor parallelism for distributed serving
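The features above map onto launch flags of vLLM's OpenAI-compatible server. A command sketch, assuming an AWQ-quantized model and two GPUs (the model name and flag values are illustrative placeholders, not taken from the skill itself):

```shell
# Serve a quantized model across 2 GPUs with an OpenAI-compatible API
# on port 8000. "your-org/your-model-awq" is a placeholder model id.
vllm serve your-org/your-model-awq \
  --quantization awq \
  --tensor-parallel-size 2 \
  --gpu-memory-utilization 0.90 \
  --max-model-len 4096 \
  --port 8000
```

`--gpu-memory-utilization` controls how much VRAM vLLM pre-allocates for weights plus the PagedAttention KV cache; lowering it leaves headroom for other processes at the cost of fewer cached tokens.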
Use Cases
- Deploying production LLM APIs
- Optimizing inference latency and throughput
- Serving large models with limited GPU memory
- Building multi-user applications like chatbots
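To see why quantization matters for the limited-GPU-memory use case, here is a back-of-envelope weight-memory estimate. The 10% overhead factor is a rough assumption for illustration, not a vLLM figure, and this ignores KV-cache and activation memory:

```python
def model_memory_gb(n_params_b: float, bits_per_param: float,
                    overhead_frac: float = 0.1) -> float:
    """Rough weight memory: params x bytes/param, plus a fixed overhead fraction."""
    bytes_total = n_params_b * 1e9 * (bits_per_param / 8)
    return bytes_total * (1 + overhead_frac) / 1e9

fp16 = model_memory_gb(7, 16)  # 7B params at 16-bit: ~15.4 GB
awq4 = model_memory_gb(7, 4)   # same model AWQ 4-bit:  ~3.85 GB
print(f"7B FP16 weights: {fp16:.1f} GB, AWQ 4-bit: {awq4:.2f} GB")
```

Under these assumptions a 7B model that would not fit on a 16 GB card in FP16 fits comfortably once quantized to 4-bit.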
Non-Goals
- CPU-based inference
- Research or prototyping with basic transformer implementations
- Absolute maximum-performance, NVIDIA-only inference (TensorRT-LLM is the better fit there)
- Fine-tuning or training models
Practices
- Production deployment
- Performance optimization
- Quantization
- Distributed serving
Prerequisites
- NVIDIA GPU with CUDA installed
- Python environment
- vLLM library installed
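Once a vLLM server is up, any OpenAI-compatible client can talk to it. A minimal stdlib sketch, assuming vLLM's default port 8000 and a placeholder model name; the payload shape follows the OpenAI chat-completions schema that vLLM mirrors:

```python
import json
import urllib.request

def build_chat_request(model: str, user_msg: str, max_tokens: int = 128) -> dict:
    # Request body per the OpenAI chat-completions schema (mirrored by vLLM).
    return {
        "model": model,
        "messages": [{"role": "user", "content": user_msg}],
        "max_tokens": max_tokens,
        "temperature": 0.7,
    }

payload = build_chat_request("your-org/your-model-awq", "Hello!")  # placeholder model id
body = json.dumps(payload).encode()

# To actually send it, point at a running vLLM server:
# req = urllib.request.Request(
#     "http://localhost:8000/v1/chat/completions",
#     data=body,
#     headers={"Content-Type": "application/json"},
# )
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
```

Because the endpoint is OpenAI-compatible, the official `openai` Python client also works by setting its `base_url` to the server address.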
Execution
- Info: Pinned dependencies. The SKILL.md lists `dependencies: [vllm, torch, transformers]` but does not explicitly pin interpreter versions or declare side-effect headers for any bundled scripts; the installation instructions simply point to `pip install vllm`.
Installation
Add the marketplace first:
/plugin marketplace add Orchestra-Research/AI-Research-SKILLs
/plugin install AI-Research-SKILLs@ai-research-skills
Quality Score
Verified
Similar Extensions
TensorRT-LLM (98)
Optimizes LLM inference with NVIDIA TensorRT for maximum throughput and lowest latency. Use for production deployment on NVIDIA GPUs (A100/H100), when you need 10-100x faster inference than PyTorch, or for serving models with quantization (FP8/INT4), in-flight batching, and multi-GPU scaling.
HQQ Quantization (98)
Half-Quadratic Quantization for LLMs without calibration data. Use when quantizing models to 4/3/2-bit precision without needing calibration datasets, for fast quantization workflows, or when deploying with vLLM or HuggingFace Transformers.
llama.cpp (95)
Runs LLM inference on CPU, Apple Silicon, and consumer GPUs without NVIDIA hardware. Use for edge deployment, M1/M2/M3 Macs, AMD/Intel GPUs, or when CUDA is unavailable. Supports GGUF quantization (1.5-8 bit) for reduced memory and 4-10× speedup vs PyTorch on CPU.
vLLM Inference Serving (93)
Serves LLMs with high throughput using vLLM's PagedAttention and continuous batching. Use when deploying production LLM APIs, optimizing inference latency/throughput, or serving models with limited GPU memory. Supports OpenAI-compatible endpoints, quantization (GPTQ/AWQ/FP8), and tensor parallelism.