
PEFT Fine Tuning

Skill · Verified · Active

Parameter-efficient fine-tuning for LLMs using LoRA, QLoRA, and 25+ methods. Use when fine-tuning large models (7B-70B) with limited GPU memory, when you need to train <1% of parameters with minimal accuracy loss, or for multi-adapter serving. Hugging Face's official library, integrated with the Transformers ecosystem.

Purpose

To enable users to fine-tune large language models efficiently on limited hardware by training only a small fraction of model parameters.

Features

  • Parameter-efficient fine-tuning (PEFT) with LoRA, QLoRA, and 25+ methods
  • Guidance for fine-tuning 7B-70B models on consumer GPUs
  • Code examples for standard LoRA and memory-efficient QLoRA
  • Detailed instructions on loading, merging, and managing adapters
  • Performance benchmarks and troubleshooting for common issues

Use Cases

  • Fine-tuning large LLMs (7B-70B) with limited GPU memory
  • Training less than 1% of model parameters with minimal accuracy loss
  • Enabling multi-adapter serving from a single base model
  • Rapid iteration on task-specific adapters for LLMs
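The "<1% of parameters" figure follows from LoRA arithmetic: a rank-r adapter on a d_in × d_out linear layer adds r·(d_in + d_out) trainable weights instead of updating all d_in·d_out. A quick back-of-the-envelope check (the layer size and rank below are illustrative, roughly matching a 7B-class attention projection):

```python
def lora_added_params(d_in: int, d_out: int, r: int) -> int:
    # LoRA factorizes the weight update as B @ A, with A: r x d_in and B: d_out x r
    return r * (d_in + d_out)

d = 4096          # hidden size typical of 7B-class models (illustrative)
r = 16            # common LoRA rank
full = d * d      # params in one full projection matrix
added = lora_added_params(d, d, r)
print(f"full: {full:,}  lora: {added:,}  fraction: {added / full:.4%}")
# rank-16 adapter on a 4096x4096 layer trains ~0.78% of that layer's weights
```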

Non-Goals

  • Full fine-tuning for maximum quality when compute budget is not a constraint
  • Training models smaller than 1B parameters
  • Scenarios requiring updating all model weights due to significant domain shift

Workflow

  1. Install necessary libraries (PEFT, Transformers, PyTorch, etc.)
  2. Configure LoRA or QLoRA parameters (rank, alpha, target modules)
  3. Load base model and apply PEFT configuration
  4. Prepare dataset and tokenize data
  5. Train the model using provided training arguments
  6. Save and optionally merge the trained adapter

Practices

  • Fine-tuning methodology
  • LLM optimization
  • Adapter management

Prerequisites

  • Python 3.8+
  • pip install peft transformers torch bitsandbytes datasets accelerate

Installation

npx skills add davila7/claude-code-templates

Runs the Vercel skills CLI (skills.sh) via npx; requires Node.js locally and at least one installed skills-compatible agent (Claude Code, Cursor, Codex, …). Assumes the repo follows the agentskills.io format.

Quality Score

Verified
96/100
Analyzed about 20 hours ago

Trust Signals

Last commit: about 22 hours ago
Stars: 27.2k
License: MIT

Similar Extensions

Peft Fine Tuning

99

Parameter-efficient fine-tuning for LLMs using LoRA, QLoRA, and 25+ methods. Use when fine-tuning large models (7B-70B) with limited GPU memory, when you need to train <1% of parameters with minimal accuracy loss, or for multi-adapter serving. HuggingFace's official library integrated with transformers ecosystem.

Skill
Orchestra-Research

Fine Tuning Expert

98

Use when fine-tuning LLMs, training custom models, or adapting foundation models for specific tasks. Invoke for configuring LoRA/QLoRA adapters, preparing JSONL training datasets, setting hyperparameters for fine-tuning runs, adapter training, transfer learning, finetuning with Hugging Face PEFT, OpenAI fine-tuning, instruction tuning, RLHF, DPO, or quantizing and deploying fine-tuned models. Trigger terms include: LoRA, QLoRA, PEFT, finetuning, fine-tuning, adapter tuning, LLM training, model training, custom model.

Skill
jeffallan

Unsloth

98

Expert guidance for fast fine-tuning with Unsloth - 2-5x faster training, 50-80% less memory, LoRA/QLoRA optimization

Skill
Orchestra-Research

Implementing Llms Litgpt

98

Implements and trains LLMs using Lightning AI's LitGPT with 20+ pretrained architectures (Llama, Gemma, Phi, Qwen, Mistral). Use when you need clean model implementations, an educational understanding of architectures, or production fine-tuning with LoRA/QLoRA. Single-file implementations, no abstraction layers.

Skill
Orchestra-Research

Implementing Llms Litgpt

100

Implements and trains LLMs using Lightning AI's LitGPT with 20+ pretrained architectures (Llama, Gemma, Phi, Qwen, Mistral). Use when you need clean model implementations, an educational understanding of architectures, or production fine-tuning with LoRA/QLoRA. Single-file implementations, no abstraction layers.

Skill
davila7

Unsloth

100

Expert guidance for fast fine-tuning with Unsloth - 2-5x faster training, 50-80% less memory, LoRA/QLoRA optimization

Skill
davila7

© 2025 SkillRepo · Find the right skill, skip the noise.