
OpenPI Fine Tuning and Serving

Skill · Verified · Active

Fine-tune and serve Physical Intelligence OpenPI models (pi0, pi0-fast, pi0.5) using JAX or PyTorch backends for robot policy inference across ALOHA, DROID, and LIBERO environments. Use when adapting pi0 models to custom datasets, converting JAX checkpoints to PyTorch, running policy inference servers, or debugging norm stats and GPU memory issues.

Purpose

Enable researchers and engineers to adapt, train, and deploy OpenPI models for robot policy inference, streamlining otherwise complex ML workflows.

Features

  • Fine-tune pi0, pi0-fast, pi0.5 models
  • Support JAX and PyTorch backends
  • Convert JAX checkpoints to PyTorch (see the sketch after this list)
  • Serve policies via WebSocket API
  • Automated environment setup and dependency management
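
The checkpoint conversion feature is easiest to picture as a pytree flattening pass. Below is a minimal sketch of the idea, not the repo's actual conversion script: flatten the JAX parameter tree, turn each leaf into a torch tensor, and save a state_dict. The function name, key naming scheme, and example parameter tree are illustrative assumptions; the real script also remaps names to match the PyTorch module layout.

import jax
import numpy as np
import torch

def jax_params_to_state_dict(params) -> dict:
    # Flatten the nested JAX param dict into (path, leaf) pairs.
    flat, _ = jax.tree_util.tree_flatten_with_path(params)
    state_dict = {}
    for key_path, leaf in flat:
        # Join pytree path segments into a dotted parameter name.
        name = ".".join(str(entry.key) for entry in key_path)
        state_dict[name] = torch.from_numpy(np.asarray(leaf))
    return state_dict

# Illustrative parameter tree; a real checkpoint has many more leaves.
params = {"encoder": {"kernel": np.ones((4, 4), np.float32)}}
torch.save(jax_params_to_state_dict(params), "checkpoint.pt")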

Use Cases

  • Adapting pi0 models to custom datasets
  • Converting JAX checkpoints to PyTorch
  • Running policy inference servers for robot control
  • Debugging norm stats and GPU memory issues during training (see the sketch after this list)
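
For the GPU memory use case, a common first step with JAX training is to rein in XLA's default memory preallocation. The sketch below uses standard JAX environment flags; the 0.9 fraction is an assumed value to tune per GPU, not something the skill prescribes.

import os

# These flags must be set before JAX is first imported to take effect.
os.environ["XLA_PYTHON_CLIENT_MEM_FRACTION"] = "0.9"   # cap preallocation (assumed value)
os.environ["XLA_PYTHON_CLIENT_PREALLOCATE"] = "false"  # allocate on demand instead

import jax
print(jax.devices())  # confirm the GPU is still visible with the new flags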

Non-Goals

  • Training or serving models other than OpenPI variants
  • General-purpose machine learning framework utilities
  • Automated dataset curation or generation

Workflow

  1. Set up environment (clone repo, sync dependencies)
  2. Select and configure training parameters
  3. Compute normalization statistics
  4. Launch training (JAX or PyTorch)
  5. Convert JAX checkpoints to PyTorch (if needed)
  6. Serve trained policies
  7. Integrate the client into robot/simulation code (see the client sketch after this list)
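
Step 7 typically amounts to a thin WebSocket client loop. A minimal sketch follows, assuming a policy server from step 6 is listening on localhost:8000. WebsocketClientPolicy and infer() come from the openpi_client package; the observation keys, shapes, and prompt below are illustrative assumptions that depend on the environment and config you trained with.

import numpy as np
from openpi_client import websocket_client_policy

# Connect to the policy server started in step 6 (host/port are assumptions).
policy = websocket_client_policy.WebsocketClientPolicy(host="localhost", port=8000)

# Illustrative observation; real keys and shapes depend on your training config.
observation = {
    "observation/image": np.zeros((224, 224, 3), dtype=np.uint8),  # camera frame
    "observation/state": np.zeros(8, dtype=np.float32),            # proprioceptive state
    "prompt": "pick up the cube",                                  # language instruction
}

# infer() returns a dict; "actions" holds the predicted action chunk
# that the robot/simulation loop then executes step by step.
action_chunk = policy.infer(observation)["actions"]
print(np.asarray(action_chunk).shape)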

Practices

  • Model Fine-Tuning
  • Policy Serving
  • JAX/PyTorch Development
  • Checkpoint Management

Prerequisites

  • JAX >= 0.4.30
  • PyTorch >= 2.1.0
  • Transformers >= 4.53.2
  • uv >= 0.4.0
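
A quick way to check the version floors above before installing is a short probe script; uv is checked via its CLI since it is not an importable module. This is a convenience sketch, not part of the skill itself.

import subprocess

import jax
import torch
import transformers

print("jax", jax.__version__)                    # want >= 0.4.30
print("torch", torch.__version__)                # want >= 2.1.0
print("transformers", transformers.__version__)  # want >= 4.53.2
# uv ships as a standalone CLI, so shell out for its version.
print(subprocess.run(["uv", "--version"], capture_output=True, text=True).stdout.strip())  # want >= 0.4.0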

Installation

Add the marketplace first:

/plugin marketplace add Orchestra-Research/AI-Research-SKILLs
/plugin install AI-Research-SKILLs@ai-research-skills

Quality Score

Verified: 98/100 (analyzed 1 day ago)

Trust Signals

Last commit: 17 days ago
Stars: 8.3k
License: MIT

Similar Extensions

Implementing Llms Litgpt

100

Implements and trains LLMs using Lightning AI's LitGPT with 20+ pretrained architectures (Llama, Gemma, Phi, Qwen, Mistral). Use when you need clean model implementations, an educational understanding of architectures, or production fine-tuning with LoRA/QLoRA. Single-file implementations, no abstraction layers.

Skill · davila7

OpenVLA OFT Fine Tuning and Evaluation

98

Fine-tunes and evaluates OpenVLA-OFT and OpenVLA-OFT+ policies for robot action generation with continuous action heads, LoRA adaptation, and FiLM conditioning on LIBERO simulation and ALOHA real-world setups. Use when reproducing OpenVLA-OFT paper results, training custom VLA action heads (L1 or diffusion), deploying server-client inference for ALOHA, or debugging normalization, LoRA merge, and cross-GPU issues.

Skill · Orchestra-Research

Unsloth

100

Expert guidance for fast fine-tuning with Unsloth: 2-5x faster training, 50-80% less memory, and LoRA/QLoRA optimization.

Skill · davila7

PyTorch Lightning

100

Deep learning framework (PyTorch Lightning). Organize PyTorch code into LightningModules, configure Trainers for multi-GPU/TPU, and implement data pipelines, callbacks, logging (W&B, TensorBoard), and distributed training (DDP, FSDP, DeepSpeed) for scalable neural network training.

Skill · K-Dense-AI

Peft Fine Tuning

99

Parameter-efficient fine-tuning for LLMs using LoRA, QLoRA, and 25+ methods. Use when fine-tuning large models (7B-70B) with limited GPU memory, when you need to train <1% of parameters with minimal accuracy loss, or for multi-adapter serving. HuggingFace's official library integrated with transformers ecosystem.

Skill · Orchestra-Research

Huggingface Llm Trainer

99

Train or fine-tune language and vision models using TRL (Transformer Reinforcement Learning) or Unsloth with Hugging Face Jobs infrastructure. Covers SFT, DPO, GRPO, and reward modeling training methods, plus GGUF conversion for local deployment. Includes guidance on the TRL Jobs package, UV scripts with PEP 723 format, dataset preparation and validation, hardware selection, cost estimation, Trackio monitoring, Hub authentication, model selection/leaderboards, and model persistence. Use for cloud GPU training or GGUF conversion tasks, or when users mention training on Hugging Face Jobs without a local GPU setup.

Skill · huggingface