
Hugging Face LLM Trainer

Plugin · Verified · Active

Train or fine-tune language models using TRL on Hugging Face Jobs infrastructure. Covers the SFT, DPO, GRPO, and reward-modeling training methods, plus GGUF conversion for local deployment. Includes hardware selection, cost estimation, Trackio monitoring, and Hub persistence.

Purpose

Enables AI agents to easily train and fine-tune large language models on powerful cloud infrastructure without requiring local setup or complex configuration.

Features

  • Train/fine-tune LLMs with TRL (SFT, DPO, GRPO)
  • Utilize Hugging Face Jobs for cloud GPU training
  • Convert models to GGUF for local deployment
  • Estimate training cost and hardware requirements
  • Integrate Trackio for real-time monitoring

Use cases

  • Fine-tuning LLMs for specific tasks on cloud GPUs
  • Experimenting with different TRL training methods
  • Converting trained models to GGUF for local inference
  • Estimating the cost and time for LLM training jobs
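The cost-estimation use case above comes down to simple arithmetic: GPU hourly rate multiplied by estimated wall-clock hours. A minimal sketch in Python (the hardware flavor names and hourly rates below are illustrative assumptions, not published Hugging Face Jobs pricing):

```python
# Back-of-envelope cost estimator for a cloud training job.
# NOTE: hardware names and hourly rates here are illustrative
# assumptions, not actual Hugging Face Jobs pricing.

HOURLY_RATES_USD = {
    "t4-small": 0.50,    # assumed rate
    "a10g-large": 3.00,  # assumed rate
    "a100-large": 8.00,  # assumed rate
}

def estimate_cost(hardware: str, hours: float) -> float:
    """Return the estimated USD cost of running `hardware` for `hours`."""
    if hardware not in HOURLY_RATES_USD:
        raise ValueError(f"unknown hardware flavor: {hardware}")
    return round(HOURLY_RATES_USD[hardware] * hours, 2)

if __name__ == "__main__":
    # e.g. a 4-hour SFT run on an assumed $3.00/h GPU
    print(estimate_cost("a10g-large", 4))
```

In practice you would also estimate the hours themselves (dataset size, epochs, tokens/second on the chosen GPU), but the cost side is just this product.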

Non-goals

  • Performing training directly on the user's local machine
  • Managing Hugging Face Hub repositories beyond saving trained models
  • Providing a UI for model training; all interactions are agent-driven

Installation

Add the marketplace first:

/plugin marketplace add huggingface/skills
/plugin install huggingface-llm-trainer@huggingface-skills

Quality score

Verified
99/100
Analyzed 1 day ago

Trust signals

  • Last commit: 2 days ago
  • Stars: 10.5k
  • License: Apache-2.0