Llama Factory
Skill · Verified · Active. Expert guidance for fine-tuning LLMs with LLaMA-Factory - no-code WebUI, 100+ models, 2/3/4/5/6/8-bit QLoRA, multimodal support.
Serves as an expert guide and documentation resource for users working with LLaMA-Factory, enabling efficient and effective fine-tuning of LLMs with a no-code approach.
Features
- Expert guidance for LLaMA-Factory
- No-code WebUI for LLM fine-tuning
- Support for 100+ models
- 2/3/4/5/6/8-bit QLoRA
- Multimodal support
- Comprehensive reference documentation
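To illustrate the workflow the skill documents, a minimal QLoRA fine-tuning config might look like the sketch below. It follows the YAML-driven style of LLaMA-Factory's example configs, but the exact field names, dataset name, and values are assumptions to verify against the version of LLaMA-Factory you install:

```yaml
# Hypothetical llama3_qlora_sft.yaml - key names mirror LLaMA-Factory's
# published example configs; check them against your installed version.
model_name_or_path: meta-llama/Meta-Llama-3-8B-Instruct
stage: sft                      # supervised fine-tuning
do_train: true
finetuning_type: lora
quantization_bit: 4             # 4-bit QLoRA; 2/3/5/6/8-bit also advertised
lora_target: all
dataset: alpaca_en_demo         # demo dataset assumed to ship with the repo
template: llama3
output_dir: saves/llama3-8b-qlora
per_device_train_batch_size: 1
gradient_accumulation_steps: 8
learning_rate: 1.0e-4
num_train_epochs: 3.0
```

Such a config would typically be launched with `llamafactory-cli train llama3_qlora_sft.yaml`, or built interactively through the no-code WebUI (`llamafactory-cli webui`).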
Use cases
- Learning LLaMA-Factory features and APIs
- Implementing LLaMA-Factory solutions
- Debugging LLaMA-Factory code
- Understanding fine-tuning best practices
Non-goals
- Directly executing LLM fine-tuning tasks (provides guidance instead)
- Replacing the LLaMA-Factory framework itself
- Covering fine-tuning frameworks other than LLaMA-Factory
Trust
- Issues: 17 open and 4 closed issues in the last 90 days, a closure rate of approximately 19%, which suggests maintainers may be slow to respond to a moderate volume of issues.
Installation
npx skills add davila7/claude-code-templates

Runs the Vercel skills CLI (skills.sh) via npx. Requires Node.js locally and at least one installed skills-compatible agent (Claude Code, Cursor, Codex, …). Assumes the repo follows the agentskills.io format.
Quality score
Verified (trust signals)
Similar extensions
- Llama Factory (95): Expert guidance for fine-tuning LLMs with LLaMA-Factory - no-code WebUI, 100+ models, 2/3/4/5/6/8-bit QLoRA, multimodal support
- Implementing Llms Litgpt (100): Implements and trains LLMs using Lightning AI's LitGPT with 20+ pretrained architectures (Llama, Gemma, Phi, Qwen, Mistral). Use when you need clean model implementations, educational understanding of architectures, or production fine-tuning with LoRA/QLoRA. Single-file implementations, no abstraction layers.
- Unsloth (100): Expert guidance for fast fine-tuning with Unsloth - 2-5x faster training, 50-80% less memory, LoRA/QLoRA optimization
- Peft Fine Tuning (99): Parameter-efficient fine-tuning for LLMs using LoRA, QLoRA, and 25+ methods. Use when fine-tuning large models (7B-70B) with limited GPU memory, when you need to train <1% of parameters with minimal accuracy loss, or for multi-adapter serving. HuggingFace's official library integrated with the transformers ecosystem.