
Llama Factory

Skill · Verified · Active

Expert guidance for fine-tuning LLMs with LLaMA-Factory: no-code WebUI, 100+ models, 2/3/4/5/6/8-bit QLoRA, multimodal support

Purpose

To serve as an expert guide and documentation resource for users working with LLaMA-Factory, enabling efficient and effective fine-tuning of LLMs with a no-code approach.

Features

  • Expert guidance for LLaMA-Factory
  • No-code WebUI for LLM fine-tuning
  • Support for 100+ models
  • 2/3/4/5/6/8-bit QLoRA
  • Multimodal support
  • Comprehensive reference documentation
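The features above can be made concrete with a minimal LLaMA-Factory training config. This is an illustrative sketch, not a verbatim example from the project: the key names follow LLaMA-Factory's published YAML examples, but the model, dataset, and hyperparameter values are placeholder assumptions you should verify against the LLaMA-Factory documentation.

```yaml
### Minimal QLoRA SFT sketch (illustrative values)
model_name_or_path: meta-llama/Meta-Llama-3-8B-Instruct  # any of the 100+ supported models
stage: sft                     # supervised fine-tuning
do_train: true
finetuning_type: lora
lora_target: all               # attach LoRA adapters to all linear layers
quantization_bit: 4            # QLoRA: quantize the frozen base model (the listing
                               # also advertises 2/3/5/6/8-bit variants)
dataset: alpaca_en_demo        # placeholder dataset name
template: llama3
output_dir: saves/llama3-8b/lora/sft
per_device_train_batch_size: 1
learning_rate: 1.0e-4
num_train_epochs: 3.0
```

With LLaMA-Factory installed, a file like this would typically be run with `llamafactory-cli train <config>.yaml`, or the same options can be set interactively through the no-code WebUI (`llamafactory-cli webui`).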

Use Cases

  • Learning LLaMA-Factory features and APIs
  • Implementing LLaMA-Factory solutions
  • Debugging LLaMA-Factory code
  • Understanding fine-tuning best practices

Non-Goals

  • Directly executing LLM fine-tuning tasks (provides guidance instead)
  • Replacing the LLaMA-Factory framework itself
  • Covering fine-tuning frameworks other than LLaMA-Factory

Trust

  • Issues attention: 17 open and 4 closed issues in the last 90 days (a closure rate of roughly 19%), suggesting maintainers may be slow to respond to a moderate volume of issues.

Installation

npx skills add davila7/claude-code-templates

Runs the Vercel skills CLI (skills.sh) via npx. Requires Node.js locally and at least one installed skills-compatible agent (Claude Code, Cursor, Codex, …). Assumes the repository follows the agentskills.io format.

Quality Score

Verified
96/100
Analyzed 1 day ago

Trust Signals

  • Last commit: 1 day ago
  • Stars: 27.2k
  • License: MIT

Similar Extensions

Llama Factory

95

Expert guidance for fine-tuning LLMs with LLaMA-Factory: no-code WebUI, 100+ models, 2/3/4/5/6/8-bit QLoRA, multimodal support

Skill
Orchestra-Research

Implementing Llms Litgpt

100

Implements and trains LLMs using Lightning AI's LitGPT with 20+ pretrained architectures (Llama, Gemma, Phi, Qwen, Mistral). Use when you need clean model implementations, an educational understanding of the architectures, or production fine-tuning with LoRA/QLoRA. Single-file implementations, no abstraction layers.

Skill
davila7

Unsloth

100

Expert guidance for fast fine-tuning with Unsloth - 2-5x faster training, 50-80% less memory, LoRA/QLoRA optimization

Skill
davila7

Peft Fine Tuning

99

Parameter-efficient fine-tuning for LLMs using LoRA, QLoRA, and 25+ methods. Use when fine-tuning large models (7B-70B) with limited GPU memory, when you need to train <1% of parameters with minimal accuracy loss, or for multi-adapter serving. Hugging Face's official library, integrated with the transformers ecosystem.

Skill
Orchestra-Research
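Several of the skills listed here (PEFT, Unsloth, LLaMA-Factory) rest on the same LoRA/QLoRA idea: freeze the pretrained weight matrix and train only a low-rank update. A minimal numpy sketch of that arithmetic, with illustrative dimensions (hidden size d=4096, rank r=8, both assumptions) chosen to show why the trainable share lands below 1%:

```python
import numpy as np

d, r = 4096, 8                            # hidden size and LoRA rank (illustrative values)
rng = np.random.default_rng(0)

W = rng.standard_normal((d, d))           # frozen pretrained weight, never updated
A = rng.standard_normal((r, d)) * 0.01    # trainable down-projection (d -> r)
B = np.zeros((d, r))                      # trainable up-projection (r -> d), zero-initialized

x = rng.standard_normal(d)
y = W @ x + B @ (A @ x)                   # LoRA forward: base output plus low-rank update

trainable = A.size + B.size               # 2*d*r parameters are actually trained
fraction = trainable / W.size
print(f"trainable fraction: {fraction:.4%}")   # ~0.39% of the full matrix
```

Zero-initializing B is the standard LoRA choice: the adapted model starts out identical to the base model, and the update B @ A grows away from zero only as training proceeds.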

Unsloth

98

Expert guidance for fast fine-tuning with Unsloth - 2-5x faster training, 50-80% less memory, LoRA/QLoRA optimization

Skill
Orchestra-Research

Implementing Llms Litgpt

98

Implements and trains LLMs using Lightning AI's LitGPT with 20+ pretrained architectures (Llama, Gemma, Phi, Qwen, Mistral). Use when you need clean model implementations, an educational understanding of the architectures, or production fine-tuning with LoRA/QLoRA. Single-file implementations, no abstraction layers.

Skill
Orchestra-Research