GGUF Quantization
GGUF format and llama.cpp quantization for efficient CPU/GPU inference. Use when deploying models on consumer hardware, Apple Silicon, or when needing flexible quantization from 2-8 bit without GPU requirements.
Guides users through preparing and running AI models in GGUF format with llama.cpp for efficient inference across a range of hardware.
Features
- GGUF format conversion and quantization
- llama.cpp build and usage instructions
- Detailed quantization type explanations
- Python bindings and server mode examples (server sketch after this list)
- Hardware-specific optimization guides (CPU, Metal, CUDA)
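The server mode called out above can be exercised without any bindings: llama-server exposes an OpenAI-compatible HTTP API. A minimal sketch, assuming an already-quantized model file (the model-Q4_K_M.gguf filename and port 8080 are illustrative):

# Serve a quantized GGUF model over an OpenAI-compatible HTTP API
./build/bin/llama-server -m model-Q4_K_M.gguf --port 8080

# From another shell, query the chat completions endpoint
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"messages": [{"role": "user", "content": "Hello"}], "max_tokens": 64}'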
Use Cases
- Deploying LLMs on consumer hardware with limited VRAM
- Running models efficiently on Apple Silicon with Metal acceleration
- Achieving flexible quantization from 2-8 bit without GPU requirements
- Integrating llama.cpp into custom applications or workflows
Non-Goals
- Providing pre-quantized models directly
- Covering other quantization formats like AWQ or GPTQ
- Detailed LLM architecture explanations beyond inference
Workflow
- Install llama.cpp and its dependencies.
- Convert a HuggingFace model to GGUF format.
- Quantize the GGUF model to a desired bit precision.
- Run inference using the quantized model via CLI, Python, or server (sketched below).
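A minimal sketch of steps 2-4, assuming llama.cpp is already built under ./build (see Prerequisites) and using Q4_K_M as an illustrative quantization target; script and binary names can differ between llama.cpp releases:

# 2. Convert a HuggingFace model directory to GGUF at 16-bit precision
python convert_hf_to_gguf.py /path/to/hf-model --outfile model-f16.gguf --outtype f16

# 3. Quantize down to 4-bit (Q4_K_M balances size and quality; Q8_0 is near-lossless)
./build/bin/llama-quantize model-f16.gguf model-Q4_K_M.gguf Q4_K_M

# 4. Run inference from the CLI
./build/bin/llama-cli -m model-Q4_K_M.gguf -p "Explain GGUF in one sentence." -n 128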
Prerequisites
- llama.cpp build environment (C/C++ compiler, CMake or make; build sketch after this list)
- Python 3.8+
- HuggingFace models (for conversion)
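A sketch of a typical build with CMake; the backend flags shown follow current llama.cpp CMake options and may differ in older releases:

# Clone and build llama.cpp
git clone https://github.com/ggml-org/llama.cpp
cd llama.cpp
cmake -B build                     # CPU build; the Metal backend is on by default on Apple Silicon
# cmake -B build -DGGML_CUDA=ON    # alternative: enable the CUDA backend for NVIDIA GPUs
cmake --build build --config Release -j

# Python dependencies for the conversion script
pip install -r requirements.txt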
Installation
First, add the marketplace, then install the plugin:

/plugin marketplace add Orchestra-Research/AI-Research-SKILLs
/plugin install AI-Research-SKILLs@ai-research-skills
Similar Extensions
- GGUF Quantization (95): GGUF format and llama.cpp quantization for efficient CPU/GPU inference. Use when deploying models on consumer hardware, Apple Silicon, or when needing flexible quantization from 2-8 bit without GPU requirements.
- Llama Cpp (95): Runs LLM inference on CPU, Apple Silicon, and consumer GPUs without NVIDIA hardware. Use for edge deployment, M1/M2/M3 Macs, AMD/Intel GPUs, or when CUDA is unavailable. Supports GGUF quantization (1.5-8 bit) for reduced memory and 4-10× speedup vs PyTorch on CPU.
- Hugging Face Local Models (95): Use to select models to run locally with llama.cpp and GGUF on CPU, Mac Metal, CUDA, or ROCm. Covers finding GGUFs, quant selection, running servers, exact GGUF file lookup, conversion, and OpenAI-compatible local serving.
- Huggingface Llm Trainer (99): Train or fine-tune language and vision models using TRL (Transformer Reinforcement Learning) or Unsloth with Hugging Face Jobs infrastructure. Covers SFT, DPO, GRPO, and reward modeling training methods, plus GGUF conversion for local deployment. Includes guidance on the TRL Jobs package, UV scripts with PEP 723 format, dataset preparation and validation, hardware selection, cost estimation, Trackio monitoring, Hub authentication, model selection/leaderboards, and model persistence. Use for tasks involving cloud GPU training, GGUF conversion, or when users mention training on Hugging Face Jobs without a local GPU setup.
- Vector Index Tuning (99): Optimize vector index performance for latency, recall, and memory. Use when tuning HNSW parameters, selecting quantization strategies, or scaling vector search infrastructure.