
Llama Factory

Skill · Verified · Active

Expert guidance for fine-tuning LLMs with LLaMA-Factory - WebUI no-code, 100+ models, 2/3/4/5/6/8-bit QLoRA, multimodal support

Purpose

To empower users to efficiently fine-tune LLMs with LLaMA-Factory through expert guidance, a no-code WebUI, and comprehensive documentation, enabling advanced model customization without deep coding.

Features

  • No-code WebUI for LLM fine-tuning
  • Support for 100+ models including LLaMA, LLaVA, Mistral, Qwen, Gemma
  • Various fine-tuning techniques: LoRA, QLoRA (2-8 bit), full parameter tuning
  • Comprehensive documentation covering installation, usage, advanced features, and troubleshooting
  • Support for NPU training and inference
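As a concrete illustration of the QLoRA support listed above, the sketch below writes a minimal LLaMA-Factory-style training config. The key names follow LLaMA-Factory's YAML convention, but the model id, dataset, and output path are placeholders to adjust for your setup; the actual training run (commented out) requires an installed LLaMA-Factory and a GPU.

```shell
# Hypothetical 4-bit QLoRA SFT config sketch -- paths and names are placeholders.
cat > qlora_sft.yaml <<'EOF'
model_name_or_path: meta-llama/Meta-Llama-3-8B-Instruct  # placeholder base model
stage: sft
do_train: true
finetuning_type: lora
quantization_bit: 4          # QLoRA: quantize base weights to 4-bit
lora_target: all
dataset: identity            # small example dataset shipped with LLaMA-Factory
template: llama3
output_dir: saves/llama3-8b-qlora
per_device_train_batch_size: 1
gradient_accumulation_steps: 8
learning_rate: 1.0e-4
num_train_epochs: 3.0
EOF

# Then launch training (requires GPU and an installed LLaMA-Factory):
# llamafactory-cli train qlora_sft.yaml
```

Lower-bit variants (2/3/5/6/8-bit, per the feature list) are selected by changing `quantization_bit`.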

Use Cases

  • Fine-tuning LLMs with custom datasets using LLaMA-Factory's WebUI
  • Implementing advanced fine-tuning strategies like QLoRA or LoRA+
  • Setting up and running inference with fine-tuned models
  • Understanding and configuring distributed training for large models
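For the WebUI and inference use cases above, a minimal sketch follows. Both commands assume an installed LLaMA-Factory; the base model id and adapter directory are placeholders, not values from this listing.

```shell
# Launch the no-code Gradio WebUI for fine-tuning (opens a local browser UI):
# llamafactory-cli webui

# Hypothetical inference config pointing at a previously trained LoRA adapter.
cat > chat_lora.yaml <<'EOF'
model_name_or_path: meta-llama/Meta-Llama-3-8B-Instruct  # placeholder base model
adapter_name_or_path: saves/llama3-8b-qlora              # placeholder adapter dir
template: llama3
finetuning_type: lora
EOF

# Interactive chat with the fine-tuned model:
# llamafactory-cli chat chat_lora.yaml
```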

Non-Goals

  • Providing a standalone LLM inference service
  • Implementing novel LLM architectures
  • Offering a platform for model hosting or deployment beyond configuration guidance

Installation

First, add the marketplace:

/plugin marketplace add Orchestra-Research/AI-Research-SKILLs
/plugin install AI-Research-SKILLs@ai-research-skills

Quality Score

Verified
95/100
Analyzed 1 day ago

Trust Signals

Last commit: 17 days ago
Stars: 8.3k
License: MIT

Similar Extensions

Llama Factory

96

Expert guidance for fine-tuning LLMs with LLaMA-Factory - WebUI no-code, 100+ models, 2/3/4/5/6/8-bit QLoRA, multimodal support

Skill
davila7

Implementing Llms Litgpt

100

Implements and trains LLMs using Lightning AI's LitGPT with 20+ pretrained architectures (Llama, Gemma, Phi, Qwen, Mistral). Use when need clean model implementations, educational understanding of architectures, or production fine-tuning with LoRA/QLoRA. Single-file implementations, no abstraction layers.

Skill
davila7

Unsloth

100

Expert guidance for fast fine-tuning with Unsloth - 2-5x faster training, 50-80% less memory, LoRA/QLoRA optimization

Skill
davila7

Ray Train

99

Distributed training orchestration across clusters. Scales PyTorch/TensorFlow/HuggingFace from laptop to 1000s of nodes. Built-in hyperparameter tuning with Ray Tune, fault tolerance, elastic scaling. Use when training massive models across multiple machines or running distributed hyperparameter sweeps.

Skill
Orchestra-Research

Huggingface Accelerate

99

Simplest distributed training API. 4 lines to add distributed support to any PyTorch script. Unified API for DeepSpeed/FSDP/Megatron/DDP. Automatic device placement, mixed precision (FP16/BF16/FP8). Interactive config, single launch command. HuggingFace ecosystem standard.

Skill
davila7

OpenPI Fine Tuning and Serving

98

Fine-tune and serve Physical Intelligence OpenPI models (pi0, pi0-fast, pi0.5) using JAX or PyTorch backends for robot policy inference across ALOHA, DROID, and LIBERO environments. Use when adapting pi0 models to custom datasets, converting JAX checkpoints to PyTorch, running policy inference servers, or debugging norm stats and GPU memory issues.

Skill
Orchestra-Research