
Llama Factory

Skill · Verified · Active

Expert guidance for fine-tuning LLMs with LLaMA-Factory: no-code WebUI, 100+ models, 2/3/4/5/6/8-bit QLoRA, multimodal support

Purpose

To empower users to efficiently fine-tune LLMs with LLaMA-Factory through expert guidance, a no-code WebUI, and comprehensive documentation, enabling advanced model customization without deep coding expertise.

Features

  • No-code WebUI for LLM fine-tuning
  • Support for 100+ models, including LLaMA, LLaVA, Mistral, Qwen, and Gemma
  • Various fine-tuning techniques: LoRA, QLoRA (2/3/4/5/6/8-bit quantization), and full-parameter tuning (see the sketch after this list)
  • Comprehensive documentation covering installation, usage, advanced features, and troubleshooting
  • Support for NPU training and inference
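
For the low-code path alongside the WebUI, here is a minimal sketch of a 4-bit QLoRA run; the config keys mirror LLaMA-Factory's example YAMLs, but the model, dataset, and output paths are illustrative placeholders, not part of this listing:

llamafactory-cli webui

llamafactory-cli train qlora_sft.yaml

# qlora_sft.yaml (hypothetical file; values are placeholders)
model_name_or_path: meta-llama/Meta-Llama-3-8B-Instruct
stage: sft
do_train: true
finetuning_type: lora
lora_target: all
quantization_bit: 4
dataset: alpaca_en_demo
template: llama3
output_dir: saves/llama3-8b-qlora
per_device_train_batch_size: 1
learning_rate: 1.0e-4
num_train_epochs: 3.0

The first command opens the no-code WebUI; the second runs the same trainer headlessly from the YAML file. Per the project's docs, the other bit widths in the 2/3/4/5/6/8 range are selected through the same quantization_bit key, paired with a compatible quantization_method.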

Use Cases

  • Fine-tuning LLMs with custom datasets using LLaMA-Factory's WebUI
  • Implementing advanced fine-tuning strategies like QLoRA or LoRA+
  • Setting up and running inference with fine-tuned models
  • Understanding and configuring distributed training for large models (see the CLI sketch after this list)
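
For the inference and distributed cases, a hedged CLI sketch reusing the adapter path from the training example above; FORCE_TORCHRUN=1 is LLaMA-Factory's documented switch for launching training through torchrun, and the paths are again placeholders:

# Chat with the fine-tuned LoRA adapter
llamafactory-cli chat --model_name_or_path meta-llama/Meta-Llama-3-8B-Instruct --adapter_name_or_path saves/llama3-8b-qlora --template llama3 --finetuning_type lora

# Multi-GPU run of the same training config
FORCE_TORCHRUN=1 llamafactory-cli train qlora_sft.yaml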

Non-Goals

  • Providing a standalone LLM inference service
  • Implementing novel LLM architectures
  • Offering a platform for model hosting or deployment beyond configuration guidance

Installation

First, add the marketplace, then install the skill:

/plugin marketplace add Orchestra-Research/AI-Research-SKILLs
/plugin install AI-Research-SKILLs@ai-research-skills

Quality Score

Verified · 95/100 · Analyzed 1 day ago

Trust Signals

Last commit: 17 days ago
Stars: 8.3k
License: MIT

Similar Extensions

Llama Factory · 96 · Skill by davila7
Expert guidance for fine-tuning LLMs with LLaMA-Factory: no-code WebUI, 100+ models, 2/3/4/5/6/8-bit QLoRA, multimodal support

Implementing Llms Litgpt · 100 · Skill by davila7
Implements and trains LLMs using Lightning AI's LitGPT with 20+ pretrained architectures (Llama, Gemma, Phi, Qwen, Mistral). Use when you need clean model implementations, an educational understanding of architectures, or production fine-tuning with LoRA/QLoRA. Single-file implementations, no abstraction layers.

Unsloth · 100 · Skill by davila7
Expert guidance for fast fine-tuning with Unsloth: 2-5x faster training, 50-80% less memory, LoRA/QLoRA optimization

Ray Train · 99 · Skill by Orchestra-Research
Distributed training orchestration across clusters. Scales PyTorch/TensorFlow/HuggingFace from a laptop to thousands of nodes. Built-in hyperparameter tuning with Ray Tune, fault tolerance, elastic scaling. Use when training massive models across multiple machines or running distributed hyperparameter sweeps.

Huggingface Accelerate · 99 · Skill by davila7
Simplest distributed training API: 4 lines to add distributed support to any PyTorch script. Unified API for DeepSpeed/FSDP/Megatron/DDP. Automatic device placement, mixed precision (FP16/BF16/FP8). Interactive config, single launch command. HuggingFace ecosystem standard.

OpenPI Fine Tuning and Serving · 98 · Skill by Orchestra-Research
Fine-tune and serve Physical Intelligence OpenPI models (pi0, pi0-fast, pi0.5) using JAX or PyTorch backends for robot policy inference across ALOHA, DROID, and LIBERO environments. Use when adapting pi0 models to custom datasets, converting JAX checkpoints to PyTorch, running policy inference servers, or debugging norm stats and GPU memory issues.
