
Hugging Face LLM Trainer

Plugin · Verified · Active

Train or fine-tune language models using TRL on Hugging Face Jobs infrastructure. Covers the SFT, DPO, GRPO, and reward modeling training methods, plus GGUF conversion for local deployment. Includes hardware selection, cost estimation, Trackio monitoring, and Hub persistence.

Purpose

Enables AI agents to easily train and fine-tune large language models on powerful cloud infrastructure without requiring local setup or complex configuration.

Features

  • Train/fine-tune LLMs with TRL (SFT, DPO, GRPO)
  • Utilize Hugging Face Jobs for cloud GPU training
  • Convert models to GGUF for local deployment
  • Estimate training cost and hardware requirements
  • Integrate Trackio for real-time monitoring
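To make the cost-estimation feature above concrete, here is a minimal sketch of the kind of calculation involved. The GPU flavor names and hourly rates are illustrative assumptions, not official Hugging Face Jobs pricing; the `estimate_job_cost` helper is hypothetical.

```python
# Hypothetical cost-estimation helper. The flavor names and USD/hour
# rates below are assumed for illustration, not real Jobs pricing.
HOURLY_RATES_USD = {
    "t4-small": 0.50,     # assumed rate
    "a10g-large": 1.50,   # assumed rate
    "a100-large": 4.00,   # assumed rate
}

def estimate_job_cost(flavor: str, hours: float) -> float:
    """Return the estimated cost in USD for a training job of the given duration."""
    return round(HOURLY_RATES_USD[flavor] * hours, 2)

print(estimate_job_cost("a10g-large", 3.0))  # → 4.5
```

An agent would run an estimate like this before submitting a job, so the user can approve the expected spend up front.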

Use Cases

  • Fine-tuning LLMs for specific tasks on cloud GPUs
  • Experimenting with different TRL training methods
  • Converting trained models to GGUF for local inference
  • Estimating the cost and time for LLM training jobs
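For the GGUF use case, a common rule of thumb is that file size is roughly parameters × bits-per-weight ÷ 8. The sketch below applies that rule; the bits-per-weight figures are approximate averages for common llama.cpp quantization types, and `gguf_size_gb` is a hypothetical helper, not part of the plugin.

```python
# Rough GGUF size estimate: size ≈ parameters × bits-per-weight / 8.
# Bits-per-weight values are approximate averages for llama.cpp quant
# types (mixed quantizations like Q4_K_M vary slightly per tensor).
BITS_PER_WEIGHT = {
    "F16": 16.0,
    "Q8_0": 8.5,
    "Q4_K_M": 4.85,  # approximate average across tensor types
}

def gguf_size_gb(params: float, quant: str) -> float:
    """Estimate GGUF file size in decimal gigabytes."""
    return round(params * BITS_PER_WEIGHT[quant] / 8 / 1e9, 2)

print(gguf_size_gb(7e9, "Q4_K_M"))  # → 4.24
print(gguf_size_gb(7e9, "F16"))     # → 14.0
```

This kind of estimate helps pick a quantization level that fits the target machine's RAM before running the actual conversion.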

Non-Goals

  • Performing training directly on the user's local machine
  • Managing Hugging Face Hub repositories beyond saving trained models
  • Providing a UI for model training; all interactions are agent-driven

Installation

First, add the marketplace:

/plugin marketplace add huggingface/skills
/plugin install huggingface-llm-trainer@huggingface-skills

Quality Score

Verified
99/100
Analyzed about 18 hours ago

Trust Signals

Last commit: 2 days ago
Stars: 10.5k
License: Apache-2.0