
Model Merging

Skill · Verified · Active

Merge multiple fine-tuned models using mergekit to combine capabilities without retraining. Use when creating specialized models by blending domain-specific expertise (math + coding + chat), improving performance beyond single models, or experimenting rapidly with model variants. Covers SLERP, TIES-Merging, DARE, Task Arithmetic, linear merging, and production deployment strategies.

Purpose

Merge multiple fine-tuned models to combine capabilities without retraining, enabling the creation of specialized models and rapid experimentation.
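To make this concrete, here is a minimal sketch of a two-model SLERP merge with mergekit. The model names are hypothetical placeholders for two fine-tunes of the same 32-layer base; the config keys follow mergekit's documented YAML format, but treat this as an illustration rather than this skill's own example.

pip install mergekit

cat > slerp.yml <<'EOF'
# SLERP: spherical interpolation between two fine-tunes of one base.
# Both model names below are hypothetical placeholders.
slices:
  - sources:
      - model: your-org/base-7b-instruct-ft   # fine-tune A (placeholder)
        layer_range: [0, 32]                  # assumes a 32-layer base
      - model: your-org/base-7b-reasoning-ft  # fine-tune B (placeholder)
        layer_range: [0, 32]
merge_method: slerp
base_model: your-org/base-7b-instruct-ft
parameters:
  t: 0.5        # 0.0 = all of model A, 1.0 = all of model B
dtype: bfloat16
EOF

mergekit-yaml slerp.yml ./merged-model

SLERP interpolates along the arc between the two weight vectors rather than the straight line between them, which preserves weight norms better than plain averaging; note that it merges exactly two models at a time.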

Features

  • Merge fine-tuned models without retraining
  • Supports multiple merge methods: SLERP, TIES-Merging, DARE, Task Arithmetic, and linear (see the sketch after this list)
  • Provides configuration examples for different model architectures (Mistral, Llama, Mixtral)
  • Includes guidance on evaluation, production deployment, and common pitfalls
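As a sketch of how the method-specific knobs differ, the hypothetical TIES-Merging config below uses mergekit's documented density and weight parameters to sparsify each model's delta from the shared base before sign-resolved averaging; all model names are placeholders.

cat > ties.yml <<'EOF'
# TIES-Merging: trim each task vector to its largest weights (density),
# resolve sign conflicts across models, then average (weight).
# All model names below are hypothetical placeholders.
models:
  - model: your-org/base-7b-math-ft
    parameters:
      density: 0.5    # keep the top 50% of this model's delta weights
      weight: 0.5     # contribution to the final average
  - model: your-org/base-7b-code-ft
    parameters:
      density: 0.5
      weight: 0.5
merge_method: ties
base_model: your-org/base-7b    # the base both fine-tunes started from
parameters:
  normalize: true               # rescale contributions to sum to 1
dtype: float16
EOF

mergekit-yaml ties.yml ./ties-merged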

Use Cases

  • Creating specialized models by blending domain-specific expertise (e.g., math + coding + chat; see the sketch after this list)
  • Improving performance beyond what any single source model achieves
  • Experimenting rapidly with model variants in minutes
  • Reducing training costs by avoiding full retraining
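For the first use case above, a hedged sketch of a three-way domain blend using mergekit's dare_ties method, which randomly drops and rescales delta weights (DARE) before TIES-style sign resolution; every model name is a hypothetical placeholder.

cat > blend.yml <<'EOF'
# Blend math, coding, and chat fine-tunes of one shared base model.
# All model names below are hypothetical placeholders.
models:
  - model: your-org/base-7b-math-ft
    parameters: {density: 0.6, weight: 0.4}
  - model: your-org/base-7b-code-ft
    parameters: {density: 0.6, weight: 0.4}
  - model: your-org/base-7b-chat-ft
    parameters: {density: 0.6, weight: 0.2}
merge_method: dare_ties
base_model: your-org/base-7b
dtype: bfloat16
EOF

mergekit-yaml blend.yml ./math-code-chat-merge

Because no gradient steps run, a merge like this typically finishes in minutes on CPU, which is what makes the rapid-experimentation and cost-reduction use cases practical.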

Non-Goals

  • Full model retraining
  • General LLM training workflows
  • Deployment outside of model artifact generation

Trust

  • Issues attention: there are 17 open issues and 4 closed issues in the last 90 days, indicating a closure rate below 50% and a moderate number of ongoing discussions.

Installation

npx skills add davila7/claude-code-templates

Runs the Vercel skills CLI (skills.sh) via npx; this needs Node.js installed locally and at least one skills-compatible agent (Claude Code, Cursor, Codex, …), and assumes the repository follows the agentskills.io format.

Quality Score

Verified
98/100
Analyzed 1 day ago

Trust Signals

Last commit: 1 day ago
Stars: 27.2k
License: MIT
Status
View Source

Similar Extensions

Model Merging

98

Merge multiple fine-tuned models using mergekit to combine capabilities without retraining. Use when creating specialized models by blending domain-specific expertise (math + coding + chat), improving performance beyond single models, or experimenting rapidly with model variants. Covers SLERP, TIES-Merging, DARE, Task Arithmetic, linear merging, and production deployment strategies.

Skill
Orchestra-Research

Implementing Llms Litgpt

100

Implements and trains LLMs using Lightning AI's LitGPT with 20+ pretrained architectures (Llama, Gemma, Phi, Qwen, Mistral). Use when you need clean model implementations, an educational understanding of architectures, or production fine-tuning with LoRA/QLoRA. Single-file implementations, no abstraction layers.

Skill
davila7

Unsloth

100

Expert guidance for fast fine-tuning with Unsloth - 2-5x faster training, 50-80% less memory, LoRA/QLoRA optimization

Skill
davila7

Huggingface Llm Trainer

99

Train or fine-tune language and vision models using TRL (Transformer Reinforcement Learning) or Unsloth with Hugging Face Jobs infrastructure. Covers SFT, DPO, GRPO, and reward-modeling training methods, plus GGUF conversion for local deployment. Includes guidance on the TRL Jobs package, UV scripts in PEP 723 format, dataset preparation and validation, hardware selection, cost estimation, Trackio monitoring, Hub authentication, model selection and leaderboards, and model persistence. Use for cloud GPU training, GGUF conversion, or whenever users mention training on Hugging Face Jobs without a local GPU setup.

Skill
huggingface

Chat Format

100

Format prompts for different LLM providers with chat templates and HNSW-powered context retrieval

Skill
ruvnet

Oh My Claudecode

100

Process-first advisor routing for Claude, Codex, or Gemini via `omc ask`, with artifact capture and no raw CLI assembly

Skill
Yeachan-Heo
