
OpenRLHF Training

Skill · Verified · Active

High-performance RLHF framework with Ray+vLLM acceleration. Use for PPO, GRPO, RLOO, DPO training of large models (7B-70B+). Built on Ray, vLLM, ZeRO-3. 2× faster than DeepSpeedChat with distributed architecture and GPU resource sharing.

Purpose

To enable efficient and high-performance Reinforcement Learning from Human Feedback (RLHF) training for large language models using a distributed architecture with advanced acceleration techniques.

Features

  • High-performance RLHF training framework
  • Support for PPO, GRPO, RLOO, DPO algorithms
  • Ray + vLLM acceleration for large models (7B-70B+)
  • Distributed architecture with multi-node GPU cluster support
  • Hybrid Engine for GPU resource sharing (see the launch sketch after this list)
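
For concreteness, here is a minimal sketch of the kind of distributed launch these features describe: a PPO run started through OpenRLHF's Ray entrypoint (openrlhf.cli.train_ppo_ray), with the actor/reference and critic/reward models colocated on shared GPUs and vLLM engines handling rollout generation. The model names, dataset, and exact flag set below are assumptions drawn from OpenRLHF's public examples and can differ between versions, so check the entrypoint's --help for the release you install.

# Submit a PPO job to a running Ray cluster (sketch; adjust node counts,
# model paths, and flags for your hardware and OpenRLHF version)
ray job submit --address="http://127.0.0.1:8265" \
  --runtime-env-json='{"working_dir": "/openrlhf"}' \
  -- python3 -m openrlhf.cli.train_ppo_ray \
  --actor_num_nodes 1 --actor_num_gpus_per_node 8 \
  --critic_num_nodes 1 --critic_num_gpus_per_node 8 \
  --ref_num_nodes 1 --ref_num_gpus_per_node 8 \
  --reward_num_nodes 1 --reward_num_gpus_per_node 8 \
  --colocate_actor_ref \
  --colocate_critic_reward \
  --vllm_num_engines 4 \
  --vllm_tensor_parallel_size 2 \
  --pretrain OpenRLHF/Llama-3-8b-sft-mixture \
  --reward_pretrain OpenRLHF/Llama-3-8b-rm-mixture \
  --prompt_data OpenRLHF/prompt-collection-v0.1 \
  --input_key context_messages \
  --apply_chat_template \
  --zero_stage 3 \
  --bf16 \
  --micro_train_batch_size 8 \
  --train_batch_size 128 \
  --micro_rollout_batch_size 16 \
  --rollout_batch_size 1024 \
  --max_epochs 1 \
  --prompt_max_len 1024 \
  --generate_max_len 1024 \
  --actor_learning_rate 5e-7 \
  --critic_learning_rate 9e-6 \
  --init_kl_coef 0.01 \
  --normalize_reward \
  --flash_attn \
  --gradient_checkpointing \
  --save_path ./checkpoint/llama3-8b-rlhf

The colocate flags are what let the actor/reference and critic/reward pairs share GPUs; small runs typically colocate everything on one node, while larger runs spread the roles across the cluster and raise the vLLM engine count to keep rollout generation from becoming the bottleneck.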

Use Cases

  • Training large language models with RLHF
  • Fine-tuning models on custom reward functions (see the remote-reward sketch after this list)
  • Leveraging distributed computing for faster training
  • Accelerating inference during RLHF rollout phases
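
For the custom-reward use case above, OpenRLHF can score rollouts against an external reward service instead of loading a reward model inside the trainer. The sketch below swaps --reward_pretrain for a remote reward endpoint; the --remote_rm_url flag exists in recent OpenRLHF releases, but the URL, its request schema, and whether your version also accepts a local Python reward-function file here are assumptions to verify against the documentation.

# Launch PPO with rollouts scored by an external reward service instead of a
# local reward model (the endpoint URL is a placeholder; start your own
# service or one of OpenRLHF's example reward-model servers first)
python3 -m openrlhf.cli.train_ppo_ray \
  --actor_num_nodes 1 --actor_num_gpus_per_node 8 \
  --ref_num_nodes 1 --ref_num_gpus_per_node 8 \
  --colocate_actor_ref \
  --vllm_num_engines 2 \
  --vllm_tensor_parallel_size 2 \
  --pretrain OpenRLHF/Llama-3-8b-sft-mixture \
  --remote_rm_url http://127.0.0.1:5000/get_reward \
  --prompt_data OpenRLHF/prompt-collection-v0.1 \
  --apply_chat_template \
  --zero_stage 3 \
  --bf16 \
  --micro_train_batch_size 8 \
  --train_batch_size 128 \
  --rollout_batch_size 1024 \
  --save_path ./checkpoint/llama3-8b-custom-reward

Because the service only needs to return a scalar score per response, the same pattern is commonly used for rule-based or verifier-style rewards (math answer checking, unit tests) rather than a learned preference model.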

Non-Goals

  • Single-node or basic model fine-tuning
  • Environments without GPU acceleration capabilities
  • Inference-only model serving outside of the training loop

Installation

npx skills add davila7/claude-code-templates

Runs the Vercel skills CLI (skills.sh) via npx; it requires Node.js installed locally and at least one skills-compatible agent (Claude Code, Cursor, Codex, …) already set up. It assumes the repository follows the agentskills.io format.

Quality Score

Verified: 97/100
Analyzed about 18 hours ago

Trust Signals

Last commit: about 20 hours ago
Stars: 27.2k
License: MIT
View Source

Similar Extensions

OpenRLHF Training (Score: 99)

High-performance RLHF framework with Ray+vLLM acceleration. Use for PPO, GRPO, RLOO, DPO training of large models (7B-70B+). Built on Ray, vLLM, ZeRO-3. 2× faster than DeepSpeedChat with distributed architecture and GPU resource sharing.

Skill by Orchestra-Research

verl RL Training (Score: 99)

Provides guidance for training LLMs with reinforcement learning using verl (Volcano Engine RL). Use when implementing RLHF, GRPO, PPO, or other RL algorithms for LLM post-training at scale with flexible infrastructure backends.

Skill by Orchestra-Research

MoE Training (Score: 98)

Train Mixture of Experts (MoE) models using DeepSpeed or HuggingFace. Use when training large-scale models with limited compute (5× cost reduction vs dense models), implementing sparse architectures like Mixtral 8x7B or DeepSeek-V3, or scaling model capacity without proportional compute increase. Covers MoE architectures, routing mechanisms, load balancing, expert parallelism, and inference optimization.

Skill by davila7

Ray Data (Score: 95)

Scalable data processing for ML workloads. Streaming execution across CPU/GPU, supports Parquet/CSV/JSON/images. Integrates with Ray Train, PyTorch, TensorFlow. Scales from single machine to 100s of nodes. Use for batch inference, data preprocessing, multi-modal data loading, or distributed ETL pipelines.

Skill by Orchestra-Research

verl RL Training (Score: 95)

Provides guidance for training LLMs with reinforcement learning using verl (Volcano Engine RL). Use when implementing RLHF, GRPO, PPO, or other RL algorithms for LLM post-training at scale with flexible infrastructure backends.

Skill by davila7

PyTorch Lightning (Score: 100)

Deep learning framework (PyTorch Lightning). Organize PyTorch code into LightningModules, configure Trainers for multi-GPU/TPU, implement data pipelines, callbacks, logging (W&B, TensorBoard), distributed training (DDP, FSDP, DeepSpeed), for scalable neural network training.

Skill by K-Dense-AI
