Llama Cpp
Skill (Active). Runs LLM inference on CPU, Apple Silicon, and consumer GPUs without NVIDIA hardware. Use it for edge deployment, M1/M2/M3 Macs, AMD/Intel GPUs, or when CUDA is unavailable. Supports GGUF quantization (1.5-8 bit) for reduced memory use and a 4-10× speedup over PyTorch on CPU.
Its goal is to enable efficient, accessible LLM inference on hardware that lacks NVIDIA GPUs, making local and edge LLM deployment feasible.
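As an illustration of what this targets, here is a minimal sketch of local GGUF inference through the llama-cpp-python bindings (one of several ways to drive llama.cpp); the model filename and parameter choices are placeholder assumptions, not something this skill prescribes:

```python
# Minimal local inference with the llama-cpp-python bindings
# (pip install llama-cpp-python). The model path is a placeholder;
# point it at any GGUF file you have downloaded.
from llama_cpp import Llama

llm = Llama(
    model_path="./models/example-model.Q4_K_M.gguf",  # placeholder GGUF file
    n_ctx=4096,        # context window size
    n_gpu_layers=-1,   # offload all layers to Metal/GPU if available; 0 = pure CPU
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize what GGUF quantization does."}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```

On macOS the same call uses the Metal backend when the package is built with Metal support and layers are offloaded; setting n_gpu_layers=0 keeps everything on the CPU.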
Features
- CPU-only inference
- Apple Silicon (M1/M2/M3) optimization
- AMD/Intel GPU support (non-CUDA)
- GGUF quantization (1.5-8 bit)
- OpenAI-compatible API server mode (see the sketch after this list)
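To make the server mode concrete, here is a sketch of querying a locally running llama.cpp server through the standard openai client; the launch command in the comment, the port, and the model name are illustrative assumptions:

```python
# Assumes a llama.cpp server is already running locally, e.g.:
#   llama-server -m ./models/example-model.Q4_K_M.gguf --port 8080
# (binary name, model file, and port are illustrative assumptions)
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8080/v1",  # llama.cpp exposes OpenAI-style routes under /v1
    api_key="sk-no-key-required",         # the local server typically ignores the key unless configured
)

resp = client.chat.completions.create(
    model="local-gguf",  # name is not used for routing by a single-model server
    messages=[{"role": "user", "content": "Hello from a non-NVIDIA machine!"}],
    max_tokens=64,
)
print(resp.choices[0].message.content)
```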
Use Cases
- Running LLMs on personal Macs or Linux machines
- Edge deployments on resource-constrained devices
- Local LLM development and testing without GPU hardware
- Utilizing models when CUDA is unavailable
Non-Goals
- Maximizing throughput on high-end NVIDIA GPUs
- Providing a Python-first API like vLLM or TensorRT-LLM
- Managing cloud infrastructure for LLM serving
Trust
- Warning (issue attention): In the last 90 days, 17 issues were opened and 4 were closed, a low closure rate (approx. 23.5%) that suggests potentially slow maintainer response.
Installation
npx skills add davila7/claude-code-templates
This runs the Vercel skills CLI (skills.sh) via npx. It requires Node.js locally and at least one installed skills-compatible agent (Claude Code, Cursor, Codex, …), and assumes the repo follows the agentskills.io format.
Similar Extensions
- GGUF Quantization (score 98): GGUF format and llama.cpp quantization for efficient CPU/GPU inference. Use when deploying models on consumer hardware, Apple Silicon, or when needing flexible quantization from 2-8 bit without GPU requirements.
- VLLM High Performance LLM Serving (score 97): Serves LLMs with high throughput using vLLM's PagedAttention and continuous batching. Use when deploying production LLM APIs, optimizing inference latency/throughput, or serving models with limited GPU memory. Supports OpenAI-compatible endpoints, quantization (GPTQ/AWQ/FP8), and tensor parallelism.
- Hugging Face Local Models (score 95): Use to select models to run locally with llama.cpp and GGUF on CPU, Mac Metal, CUDA, or ROCm. Covers finding GGUFs, quant selection, running servers, exact GGUF file lookup, conversion, and OpenAI-compatible local serving.
- Cli Anything Quietshrink (score 99): Compress macOS screen recordings with zero CPU stress using Apple Silicon's hardware HEVC encoder. Typically reduces file size 70-90% while staying visually lossless. The computer stays silent during encoding.