Llama Factory

Skill · Verified · Active

Expert guidance for fine-tuning LLMs with LLaMA-Factory: no-code WebUI, 100+ models, 2/3/4/5/6/8-bit QLoRA, multimodal support

Purpose

To serve as an expert guide and documentation resource for users working with LLaMA-Factory, enabling efficient and effective fine-tuning of LLMs with a no-code approach.
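
As a minimal sketch of that no-code path, assuming a standard install from the hiyouga/LLaMA-Factory repo (the extras and the llamafactory-cli entry point follow that project's README and may vary by version):

git clone https://github.com/hiyouga/LLaMA-Factory.git
cd LLaMA-Factory
pip install -e ".[torch,metrics]"   # extras taken from the project README; adjust to your setup
llamafactory-cli webui              # launches the Gradio WebUI for point-and-click fine-tuning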

Features

  • Expert guidance for LLaMA-Factory
  • No-code WebUI for LLM fine-tuning
  • Support for 100+ models
  • 2/3/4/5/6/8-bit QLoRA (see the config sketch after this list)
  • Multimodal support
  • Comprehensive reference documentation
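
To ground the QLoRA and 100+ model features above, a hedged sketch of a 4-bit QLoRA SFT run: the YAML keys mirror the example configs shipped with LLaMA-Factory and may differ across versions, and the model id is a placeholder you would swap for your own.

cat > qlora_sft.yaml <<'EOF'
# model: any of the 100+ supported Hugging Face model ids (placeholder below)
model_name_or_path: meta-llama/Meta-Llama-3-8B-Instruct
quantization_bit: 4        # 4-bit QLoRA via bitsandbytes; other bit widths use other backends
# method
stage: sft
do_train: true
finetuning_type: lora
lora_target: all
# data: alpaca_en_demo ships with the repo's example data
dataset: alpaca_en_demo
template: llama3
cutoff_len: 1024
# training
output_dir: saves/llama3-8b-qlora
per_device_train_batch_size: 1
gradient_accumulation_steps: 8
learning_rate: 1.0e-4
num_train_epochs: 3.0
EOF
llamafactory-cli train qlora_sft.yaml

After training, llamafactory-cli export (or the WebUI's export tab) can merge the LoRA adapter into the base model; exact options are version-dependent.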

Use Cases

  • Learning LLaMA-Factory features and APIs
  • Implementing LLaMA-Factory solutions
  • Debugging LLaMA-Factory code
  • Understanding fine-tuning best practices

Non-Goals

  • Directly executing LLM fine-tuning tasks (provides guidance instead)
  • Replacing the LLaMA-Factory framework itself
  • Covering fine-tuning frameworks other than LLaMA-Factory

Trust

  • Issues attention: 17 open and 4 closed issues in the last 90 days (a closure rate of roughly 19%), suggesting maintainers may be slow to respond to a moderate volume of issues.

Installation

npx skills add davila7/claude-code-templates

Runs the Vercel skills CLI (skills.sh) via npx. Requires Node.js locally and at least one installed skills-compatible agent (Claude Code, Cursor, Codex, …). Assumes the repo follows the agentskills.io format.
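
A hedged end-to-end example (only the skills add invocation above is documented here; the version check is just a sanity step):

node --version                                 # confirm Node.js is available
npx skills add davila7/claude-code-templates   # fetches the skills CLI and installs the skill into detected agents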

Quality Score

Verified: 96/100
Analyzed 1 day ago

Trust Signals

Last commit: 1 day ago
Stars: 27.2k
License: MIT
View Source

Similar Extensions

Llama Factory · score 95 · Skill by Orchestra-Research
Expert guidance for fine-tuning LLMs with LLaMA-Factory: no-code WebUI, 100+ models, 2/3/4/5/6/8-bit QLoRA, multimodal support

Implementing Llms Litgpt · score 100 · Skill by davila7
Implements and trains LLMs using Lightning AI's LitGPT with 20+ pretrained architectures (Llama, Gemma, Phi, Qwen, Mistral). Use when you need clean model implementations, educational understanding of architectures, or production fine-tuning with LoRA/QLoRA. Single-file implementations, no abstraction layers.

Unsloth · score 100 · Skill by davila7
Expert guidance for fast fine-tuning with Unsloth: 2-5x faster training, 50-80% less memory, LoRA/QLoRA optimization

Peft Fine Tuning · score 99 · Skill by Orchestra-Research
Parameter-efficient fine-tuning for LLMs using LoRA, QLoRA, and 25+ methods. Use when fine-tuning large models (7B-70B) with limited GPU memory, when you need to train <1% of parameters with minimal accuracy loss, or for multi-adapter serving. HuggingFace's official library, integrated with the transformers ecosystem.

Unsloth · score 98 · Skill by Orchestra-Research
Expert guidance for fast fine-tuning with Unsloth: 2-5x faster training, 50-80% less memory, LoRA/QLoRA optimization

Implementing Llms Litgpt · score 98 · Skill by Orchestra-Research
Implements and trains LLMs using Lightning AI's LitGPT with 20+ pretrained architectures (Llama, Gemma, Phi, Qwen, Mistral). Use when you need clean model implementations, educational understanding of architectures, or production fine-tuning with LoRA/QLoRA. Single-file implementations, no abstraction layers.

© 2025 SkillRepo · Find the right skill, skip the noise.