Unsloth
Expert guidance for fast fine-tuning with Unsloth - 2-5x faster training, 50-80% less memory, LoRA/QLoRA optimization
Equips users with expert knowledge for optimizing LLM fine-tuning with Unsloth, achieving significantly faster training and lower memory usage.
Features
- Fast fine-tuning guidance (2-5x faster training)
- Memory optimization guidance (50-80% less memory)
- LoRA and QLoRA optimization strategies (see the setup sketch after this list)
- Support for various LLMs and frameworks
- Comprehensive documentation and examples
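As a flavor of the guidance this skill provides, here is a minimal QLoRA setup sketch using Unsloth's FastLanguageModel API. The model name, LoRA rank, and target modules are illustrative defaults drawn from common Unsloth examples, not a prescription for every task.

```python
from unsloth import FastLanguageModel

# Load a 4-bit quantized base model (QLoRA-style); the model name is illustrative.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/llama-3-8b-bnb-4bit",
    max_seq_length=2048,
    load_in_4bit=True,  # 4-bit weights provide most of the memory savings
)

# Attach LoRA adapters; rank/alpha and target modules are typical starting points.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    lora_dropout=0,
    target_modules=[
        "q_proj", "k_proj", "v_proj", "o_proj",
        "gate_proj", "up_proj", "down_proj",
    ],
    use_gradient_checkpointing="unsloth",  # Unsloth's memory-saving checkpointing
)
```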
Use cases
- Fine-tuning LLMs with Unsloth for improved performance.
- Learning best practices for efficient LoRA/QLoRA implementations.
- Troubleshooting common issues in LLM fine-tuning processes.
- Accelerating research and development cycles through optimized training (see the training sketch below).
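To illustrate the training side, the following sketch continues the setup above with TRL's SFTTrainer, as in the official Unsloth notebooks. The dataset, prompt template, and hyperparameters are illustrative only, and newer TRL releases may expect an SFTConfig instead of these keyword arguments.

```python
from datasets import load_dataset
from transformers import TrainingArguments
from trl import SFTTrainer

# Assumes `model` and `tokenizer` from the QLoRA sketch above.
# The dataset and prompt template are illustrative examples.
dataset = load_dataset("yahma/alpaca-cleaned", split="train")

def to_text(example):
    # Flatten instruction/response pairs into a single "text" column.
    return {
        "text": f"### Instruction:\n{example['instruction']}\n\n"
                f"### Response:\n{example['output']}" + tokenizer.eos_token
    }

dataset = dataset.map(to_text)

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=2048,
    args=TrainingArguments(
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,   # effective batch size of 8
        learning_rate=2e-4,
        max_steps=60,                    # short demo run; raise for real training
        fp16=True,                       # use bf16=True on Ampere+ GPUs instead
        logging_steps=10,
        output_dir="outputs",
    ),
)
trainer.train()
```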
Non-goals
- Providing direct code execution for fine-tuning.
- Replacing the official Unsloth documentation; this skill is a curated guide meant to complement it.
- Covering fine-tuning methods outside of Unsloth's scope.
Installation
First, add the marketplace:
/plugin marketplace add Orchestra-Research/AI-Research-SKILLs
/plugin install AI-Research-SKILLs@ai-research-skills
Similar extensions
- Unsloth (score 100): Expert guidance for fast fine-tuning with Unsloth - 2-5x faster training, 50-80% less memory, LoRA/QLoRA optimization.
- Peft Fine Tuning (score 99): Parameter-efficient fine-tuning for LLMs using LoRA, QLoRA, and 25+ methods. Use when fine-tuning large models (7B-70B) with limited GPU memory, when you need to train <1% of parameters with minimal accuracy loss, or for multi-adapter serving. HuggingFace's official library, integrated with the transformers ecosystem.
- Implementing Llms Litgpt (score 98): Implements and trains LLMs using Lightning AI's LitGPT with 20+ pretrained architectures (Llama, Gemma, Phi, Qwen, Mistral). Use when you need clean model implementations, educational understanding of architectures, or production fine-tuning with LoRA/QLoRA. Single-file implementations, no abstraction layers.
- Fine Tuning Expert (score 98): Use when fine-tuning LLMs, training custom models, or adapting foundation models for specific tasks. Invoke for configuring LoRA/QLoRA adapters, preparing JSONL training datasets, setting hyperparameters for fine-tuning runs, adapter training, transfer learning, fine-tuning with Hugging Face PEFT, OpenAI fine-tuning, instruction tuning, RLHF, DPO, or quantizing and deploying fine-tuned models. Trigger terms include: LoRA, QLoRA, PEFT, finetuning, fine-tuning, adapter tuning, LLM training, model training, custom model.