
Llama Factory

Skill · Verified · Active

Expert guidance for fine-tuning LLMs with LLaMA-Factory - WebUI no-code, 100+ models, 2/3/4/5/6/8-bit QLoRA, multimodal support

Purpose

To serve as an expert guide and documentation resource for users working with LLaMA-Factory, enabling efficient and effective fine-tuning of LLMs with a no-code approach.

Features

  • Expert guidance for LLaMA-Factory
  • No-code WebUI for LLM fine-tuning
  • Support for 100+ models
  • 2/3/4/5/6/8-bit QLoRA
  • Multimodal support
  • Comprehensive reference documentation

Use cases

  • Learning LLaMA-Factory features and APIs
  • Implementing LLaMA-Factory solutions
  • Debugging LLaMA-Factory code
  • Understanding fine-tuning best practices

Non-goals

  • Directly executing LLM fine-tuning tasks (provides guidance instead)
  • Replacing the LLaMA-Factory framework itself
  • Covering fine-tuning frameworks other than LLaMA-Factory

Trust

  • Issues attention: there are 17 open and 4 closed issues in the last 90 days, a closure rate of approximately 19% (4 of 21), which suggests maintainers may be slow to respond to a moderate volume of issues.

Installation

npx skills add davila7/claude-code-templates

Runs the Vercel skills CLI (skills.sh) via npx. Requires a local Node.js installation and at least one skills-compatible agent (Claude Code, Cursor, Codex, etc.), and assumes the repository follows the agentskills.io format.
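Once installed, the skill provides guidance rather than executing training itself. As a sketch of the kind of workflow it guides, here is a minimal 4-bit QLoRA supervised fine-tuning config in the style of LLaMA-Factory's published example configs; key names follow those examples, but the model, dataset, and hyperparameters are illustrative placeholders to verify against the current LLaMA-Factory docs:

```yaml
# Hypothetical 4-bit QLoRA SFT config sketch for LLaMA-Factory.
# Keys follow the examples/ directory conventions; values are placeholders.
model_name_or_path: meta-llama/Meta-Llama-3-8B-Instruct
stage: sft                        # supervised fine-tuning
do_train: true
finetuning_type: lora
quantization_bit: 4               # QLoRA; 2/3/4/5/6/8-bit are advertised
lora_target: all
dataset: alpaca_en_demo           # demo dataset bundled with the repo
template: llama3
output_dir: saves/llama3-8b-qlora
per_device_train_batch_size: 1
gradient_accumulation_steps: 8
learning_rate: 1.0e-4
num_train_epochs: 3.0
```

A config like this would typically be launched with `llamafactory-cli train <config>.yaml`, or assembled interactively in the no-code WebUI (`llamafactory-cli webui`).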

Quality score

Verified
96/100
Analyzed 1 day ago

Trust signals

Last commit: 1 day ago
Stars: 27.2k
License: MIT
Status

Similar extensions

Llama Factory

95

Expert guidance for fine-tuning LLMs with LLaMA-Factory - WebUI no-code, 100+ models, 2/3/4/5/6/8-bit QLoRA, multimodal support

Skill
Orchestra-Research

Implementing Llms Litgpt

100

Implements and trains LLMs using Lightning AI's LitGPT with 20+ pretrained architectures (Llama, Gemma, Phi, Qwen, Mistral). Use when you need clean model implementations, an educational understanding of architectures, or production fine-tuning with LoRA/QLoRA. Single-file implementations, no abstraction layers.

Skill
davila7

Unsloth

100

Expert guidance for fast fine-tuning with Unsloth - 2-5x faster training, 50-80% less memory, LoRA/QLoRA optimization

Skill
davila7

Peft Fine Tuning

99

Parameter-efficient fine-tuning for LLMs using LoRA, QLoRA, and 25+ methods. Use when fine-tuning large models (7B-70B) with limited GPU memory, when you need to train <1% of parameters with minimal accuracy loss, or for multi-adapter serving. HuggingFace's official library integrated with transformers ecosystem.

Skill
Orchestra-Research

Unsloth

98

Expert guidance for fast fine-tuning with Unsloth - 2-5x faster training, 50-80% less memory, LoRA/QLoRA optimization

Skill
Orchestra-Research

Implementing Llms Litgpt

98

Implements and trains LLMs using Lightning AI's LitGPT with 20+ pretrained architectures (Llama, Gemma, Phi, Qwen, Mistral). Use when you need clean model implementations, an educational understanding of architectures, or production fine-tuning with LoRA/QLoRA. Single-file implementations, no abstraction layers.

Skill
Orchestra-Research