
Model Merging

Skill Verified Active

Merge multiple fine-tuned models using mergekit to combine capabilities without retraining. Use when creating specialized models by blending domain-specific expertise (math + coding + chat), improving performance beyond single models, or experimenting rapidly with model variants. Covers SLERP, TIES-Merging, DARE, Task Arithmetic, linear merging, and production deployment strategies.
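As a sketch of what driving mergekit looks like in practice (the model names below are placeholders, not part of this skill), a SLERP merge of two fine-tuned checkpoints is typically described in a YAML configuration along these lines:

```yaml
# Hypothetical SLERP merge of a math- and a code-specialized checkpoint.
slices:
  - sources:
      - model: org/math-model        # placeholder model id
        layer_range: [0, 32]
      - model: org/code-model        # placeholder model id
        layer_range: [0, 32]
merge_method: slerp
base_model: org/math-model
parameters:
  t: 0.5          # interpolation factor: 0 = first model, 1 = second model
dtype: bfloat16
```

The config is then passed to mergekit's CLI to produce the merged checkpoint.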

Purpose

Combine capabilities from multiple LLMs to create specialized, higher-performing models efficiently and experiment rapidly with model variants.

Features

  • Merge multiple fine-tuned models
  • Support for SLERP, TIES-Merging, DARE, Task Arithmetic, linear merging
  • Configuration examples for various merge methods
  • Guidance on production deployment and quantization
  • Combine domain-specific expertise without retraining
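Of the methods listed above, SLERP interpolates along the great-circle arc between two weight tensors rather than the straight line, which preserves weight norms better than plain linear averaging. A minimal NumPy sketch (not mergekit's internal implementation), falling back to linear interpolation when the tensors are nearly colinear:

```python
import numpy as np

def slerp(w0: np.ndarray, w1: np.ndarray, t: float, eps: float = 1e-8) -> np.ndarray:
    """Spherical linear interpolation between two flattened weight tensors."""
    v0 = w0 / (np.linalg.norm(w0) + eps)
    v1 = w1 / (np.linalg.norm(w1) + eps)
    dot = np.clip(np.dot(v0, v1), -1.0, 1.0)
    omega = np.arccos(dot)            # angle between the two weight directions
    if np.sin(omega) < eps:           # nearly colinear: fall back to lerp
        return (1 - t) * w0 + t * w1
    s0 = np.sin((1 - t) * omega) / np.sin(omega)
    s1 = np.sin(t * omega) / np.sin(omega)
    return s0 * w0 + s1 * w1
```

At t = 0 or t = 1 this returns the respective endpoint exactly; intermediate values trace the arc between the two weight directions.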

Use Cases

  • Creating specialized models by blending domain-specific expertise (math + coding + chat)
  • Improving performance beyond single models
  • Experimenting rapidly with model variants in minutes
  • Reducing training costs by merging instead of retraining
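Task Arithmetic, one of the covered merge methods, makes the "merge instead of retrain" use case concrete: it builds a task vector for each fine-tuned model (fine-tuned weights minus base weights) and adds a scaled sum of those vectors back onto the base model. A toy sketch on plain arrays (the scaling factor is a tunable assumption, not a fixed value from this skill):

```python
import numpy as np

def task_arithmetic_merge(base, finetuned_models, scale=0.5):
    """Merge by adding scaled task vectors (finetuned - base) onto the base weights."""
    merged = {}
    for name, base_w in base.items():
        task_vectors = [ft[name] - base_w for ft in finetuned_models]
        merged[name] = base_w + scale * sum(task_vectors)
    return merged

# Hypothetical single-tensor state dicts standing in for real checkpoints.
base = {"layer.weight": np.zeros(3)}
math_model = {"layer.weight": np.array([1.0, 0.0, 0.0])}
code_model = {"layer.weight": np.array([0.0, 1.0, 0.0])}
merged = task_arithmetic_merge(base, [math_model, code_model], scale=0.5)
# merged["layer.weight"] → [0.5, 0.5, 0.0]
```

Each specialist contributes its delta from the base, so neither model needs to be retrained to combine their capabilities.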

Non-Goals

  • Retraining models from scratch
  • Training large language models
  • Evaluating merged models beyond the provided guidance
  • Developing new model merging techniques

Execution

  • Pinned dependencies: dependencies are listed, but the SKILL.md does not pin specific versions or provide lockfiles; installation relies on standard pip resolution.

Installation

First, add the marketplace:

/plugin marketplace add Orchestra-Research/AI-Research-SKILLs
/plugin install AI-Research-SKILLs@ai-research-skills

Quality Score

Verified
98/100
Analyzed 1 day ago

Trust Signals

Last commit: 17 days ago
Stars: 8.3k
License: MIT

Similar Extensions

Model Merging

98

Merge multiple fine-tuned models using mergekit to combine capabilities without retraining. Use when creating specialized models by blending domain-specific expertise (math + coding + chat), improving performance beyond single models, or experimenting rapidly with model variants. Covers SLERP, TIES-Merging, DARE, Task Arithmetic, linear merging, and production deployment strategies.

Skill
davila7

Chat Format

100

Format prompts for different LLM providers with chat templates and HNSW-powered context retrieval

Skill
ruvnet

Oh My Claudecode

100

Process-first advisor routing for Claude, Codex, or Gemini via `omc ask`, with artifact capture and no raw CLI assembly

Skill
Yeachan-Heo

Wrap Up Ritual

100

End-of-session ritual that audits changes, runs quality checks, captures learnings, and produces a session summary. Use when saying "wrap up", "done for the day", "finish coding", or ending a coding session.

Skill
rohitg00

Project Development

100

This skill should be used when the user asks to "start an LLM project", "design batch pipeline", "evaluate task-model fit", "structure agent project", or mentions pipeline architecture, agent-assisted development, cost estimation, or choosing between LLM and traditional approaches.

Skill
muratcankoylan

Context Compression

100

This skill should be used when the user asks to "compress context", "summarize conversation history", "implement compaction", "reduce token usage", or mentions context compression, structured summarization, tokens-per-task optimization, or long-running agent sessions exceeding context limits.

Skill
muratcankoylan