VLLM Inference Serving
Status: active. Serves LLMs with high throughput using vLLM's PagedAttention and continuous batching. Use when deploying production LLM APIs, optimizing inference latency or throughput, or serving models with limited GPU memory. Supports OpenAI-compatible endpoints, quantization (GPTQ/AWQ/FP8), and tensor parallelism.
Purpose: enable efficient, high-throughput deployment of large language models for production APIs and applications, especially when optimizing for latency, throughput, or limited GPU memory.
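As a quick taste of the workflow, here is a minimal sketch of vLLM's offline Python API; the model name is a placeholder (any HuggingFace model vLLM supports will do), and it assumes vLLM is installed (pip install vllm) on a machine with a suitable NVIDIA GPU.

    # Minimal offline inference with vLLM (placeholder model name).
    from vllm import LLM, SamplingParams

    llm = LLM(model="meta-llama/Llama-3.1-8B-Instruct")  # assumed model; pick one that fits your GPU
    params = SamplingParams(temperature=0.7, max_tokens=128)
    outputs = llm.generate(["Explain PagedAttention in one sentence."], params)
    print(outputs[0].outputs[0].text)

Continuous batching and PagedAttention are applied automatically inside the engine; no extra configuration is needed for this path.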
Features
- High-throughput LLM serving with vLLM
- Optimized inference latency and throughput
- Support for limited GPU memory scenarios
- OpenAI-compatible API endpoints
- Quantization support (GPTQ, AWQ, FP8) and tensor parallelism (a combined sketch follows this list)
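How the last two features combine in practice: the LLM constructor takes quantization and tensor_parallel_size arguments. A hedged sketch; the AWQ checkpoint name is a placeholder and two GPUs are assumed.

    # Serve an AWQ-quantized model sharded across 2 GPUs (placeholder checkpoint).
    from vllm import LLM

    llm = LLM(
        model="TheBloke/Llama-2-13B-AWQ",  # assumed pre-quantized AWQ checkpoint
        quantization="awq",                # "gptq" and "fp8" are also accepted
        tensor_parallel_size=2,            # shard weights across 2 GPUs
    )

Quantization shrinks the weights to fit smaller GPUs, while tensor parallelism splits one model across several GPUs; the two can be combined when a quantized model is still too large for a single device.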
Use Cases
- Deploying production-ready LLM APIs
- Optimizing inference performance for cost and speed
- Serving large language models on resource-constrained hardware
- Building applications that require low-latency, high-concurrency LLM interactions (a client sketch follows this list)
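Because the server exposes an OpenAI-compatible endpoint, existing OpenAI client code works against it unchanged. A minimal sketch, assuming a vLLM server is already running locally on its default port 8000 and the openai Python package is installed; the model name must match whatever the server was launched with.

    # Query a local vLLM OpenAI-compatible server (assumed at localhost:8000).
    from openai import OpenAI

    client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")  # key is ignored unless the server enforces one
    resp = client.chat.completions.create(
        model="meta-llama/Llama-3.1-8B-Instruct",  # must match the served model
        messages=[{"role": "user", "content": "Say hello in five words."}],
    )
    print(resp.choices[0].message.content)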
Non-Goals
- Training or fine-tuning LLMs
- Providing a general-purpose Python inference library outside of vLLM's scope
- Serving models without NVIDIA GPUs (NVIDIA hardware is the primary focus)
- Managing the entire cloud infrastructure for LLM deployment
Prerequisites
- NVIDIA GPU with sufficient VRAM for the target model (see the memory-tuning sketch after this list)
- CUDA toolkit installed
- Python environment
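When VRAM is tight, two constructor arguments matter most: gpu_memory_utilization caps the fraction of GPU memory vLLM may claim, and max_model_len bounds the context and therefore the KV-cache size. The values below are illustrative assumptions, not recommendations, and the model name is a placeholder.

    # Fit a model into limited VRAM by capping memory use and context length.
    from vllm import LLM

    llm = LLM(
        model="mistralai/Mistral-7B-Instruct-v0.2",  # placeholder model
        gpu_memory_utilization=0.85,  # fraction of GPU memory vLLM may claim (default is 0.9)
        max_model_len=4096,           # shorter context means a smaller KV cache
    )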
Trust
- Warning (issue attention): 17 open issues and 4 closed issues in the last 90 days, indicating a low closure rate and potentially slow maintainer response.
Installation
npx skills add davila7/claude-code-templates

Runs the Vercel skills CLI (skills.sh) via npx. Requires Node.js locally and at least one installed skills-compatible agent (Claude Code, Cursor, Codex, …). Assumes the repo follows the agentskills.io format.
Similar Extensions
Hqq Quantization (score 98)
Half-Quadratic Quantization for LLMs without calibration data. Use when quantizing models to 4/3/2-bit precision without needing calibration datasets, for fast quantization workflows, or when deploying with vLLM or HuggingFace Transformers.
VLLM High Performance LLM Serving (score 97)
Serves LLMs with high throughput using vLLM's PagedAttention and continuous batching. Use when deploying production LLM APIs, optimizing inference latency/throughput, or serving models with limited GPU memory. Supports OpenAI-compatible endpoints, quantization (GPTQ/AWQ/FP8), and tensor parallelism.
AWQ Quantization (score 95)
Activation-aware weight quantization for 4-bit LLM compression with 3x speedup and minimal accuracy loss. Use when deploying large models (7B-70B) on limited GPU memory, when you need faster inference than GPTQ with better accuracy preservation, or for instruction-tuned and multimodal models. MLSys 2024 Best Paper Award winner.
PyMC Bayesian Modeling (score 99)
Bayesian modeling with PyMC. Build hierarchical models, run MCMC (NUTS) and variational inference, compare models with LOO/WAIC, and perform posterior checks for probabilistic programming and inference.
LLM Models via OpenRouter (score 99)
Access Claude, Gemini, Kimi, GLM and 100+ LLMs via inference.sh CLI using OpenRouter. Models: Claude Opus 4.5, Claude Sonnet 4.5, Claude Haiku 4.5, Gemini 3 Pro, Kimi K2, GLM-4.6, Intellect 3. One API for all models with automatic fallback and cost optimization. Use for: AI assistants, code generation, reasoning, agents, chat, content generation. Triggers: claude api, openrouter, llm api, claude sonnet, claude opus, gemini api, kimi, language model, gpt alternative, anthropic api, ai model api, llm access, chat api, claude alternative, openai alternative