
ML Training Recipes

Skill Verified Active

Battle-tested PyTorch training recipes for all domains — LLMs, vision, diffusion, medical imaging, protein/drug discovery, spatial omics, genomics. Covers training loops, optimizer selection (AdamW, Muon), LR scheduling, mixed precision, debugging, and systematic experimentation. Use when training or fine-tuning neural networks, debugging loss spikes or OOM, choosing architectures, or optimizing GPU throughput.

Purpose

Provides expert-level, production-ready PyTorch training patterns and debugging strategies so users can train and fine-tune neural networks efficiently.

Features

  • PyTorch training recipes for LLMs, vision, diffusion, and biomedical domains
  • Covers training loops, optimizer selection (AdamW, Muon), and LR scheduling
  • Includes mixed precision, debugging techniques, and systematic experimentation patterns
  • Provides reference files for detailed architecture, scaling laws, and optimizer configurations
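As an illustration of the optimizer and LR-scheduling recipes above, the common linear-warmup-then-cosine-decay multiplier can be written as a plain function; this is a generic sketch, not the skill's actual code, and the step counts and the AdamW hyperparameters in the comment are illustrative defaults:

```python
import math

def warmup_cosine(step, warmup_steps=1000, total_steps=100_000, min_ratio=0.1):
    """LR multiplier: linear warmup, then cosine decay to min_ratio of peak."""
    if step < warmup_steps:
        return step / max(1, warmup_steps)
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    return min_ratio + (1 - min_ratio) * 0.5 * (1 + math.cos(math.pi * progress))

# Typical attachment in PyTorch (names are placeholders):
#   optimizer = torch.optim.AdamW(model.parameters(), lr=3e-4,
#                                 betas=(0.9, 0.95), weight_decay=0.1)
#   scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, warmup_cosine)
#   ...then call scheduler.step() once per optimizer step.
```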

Use Cases

  • Training or fine-tuning neural networks with PyTorch
  • Debugging common training issues like loss spikes or out-of-memory errors
  • Selecting appropriate model architectures and optimizers for specific data types and scales
  • Optimizing GPU throughput and resource utilization during training
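One simple pattern for the loss-spike use case above is to track a running average of the loss and skip optimizer steps that jump far above it. The helper below is an illustrative sketch (not part of the skill's API); `SpikeGuard` and its parameters are names chosen here:

```python
class SpikeGuard:
    """Flag training steps whose loss spikes far above a running average."""

    def __init__(self, k=3.0, decay=0.99):
        self.k = k          # skip steps where loss > k * running average
        self.decay = decay  # EMA decay for the running loss average
        self.ema = None

    def should_skip(self, loss):
        if self.ema is None:           # first step: seed the average
            self.ema = loss
            return False
        if loss > self.k * self.ema:   # spike: don't fold it into the average
            return True
        self.ema = self.decay * self.ema + (1 - self.decay) * loss
        return False
```

In a training loop, you would call `guard.should_skip(loss.item())` after the backward pass and skip `optimizer.step()` (or roll back to a recent checkpoint) when it returns True.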

Non-Goals

  • Providing pre-trained models
  • Handling deployment or inference-specific optimizations
  • Offering recipes for frameworks other than PyTorch

Workflow

  1. Understand data type and scale
  2. Select appropriate architecture based on decision trees
  3. Configure optimizer and LR schedule
  4. Implement training loop with mixed precision and EMA
  5. Debug issues using provided checklists and patterns
  6. Track experiments systematically for comparison
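Step 4 of the workflow (a training loop with mixed precision and EMA) can be sketched roughly as below. This is a generic pattern under stated assumptions (PyTorch >= 2.0; `model`, `loader`, and `optimizer` are placeholders supplied by the caller), not the skill's actual implementation:

```python
import copy
import torch
import torch.nn.functional as F

def train(model, loader, optimizer, epochs=1, ema_decay=0.999, device="cuda"):
    """One epoch-style loop: bf16 autocast, grad clipping, and an EMA copy."""
    use_cuda = device == "cuda"
    scaler = torch.cuda.amp.GradScaler(enabled=use_cuda)  # no-op off-GPU
    ema = copy.deepcopy(model).eval().requires_grad_(False)
    model.to(device).train()
    ema.to(device)
    for _ in range(epochs):
        for x, y in loader:
            x, y = x.to(device), y.to(device)
            optimizer.zero_grad(set_to_none=True)
            with torch.autocast(device_type=device, dtype=torch.bfloat16):
                loss = F.cross_entropy(model(x), y)
            scaler.scale(loss).backward()
            scaler.unscale_(optimizer)  # clip in true (unscaled) units
            torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)
            scaler.step(optimizer)
            scaler.update()
            with torch.no_grad():  # EMA tracks a smoothed weight copy
                for pe, pm in zip(ema.parameters(), model.parameters()):
                    pe.mul_(ema_decay).add_(pm, alpha=1.0 - ema_decay)
    return ema  # evaluate / checkpoint the EMA weights
```

Evaluating the returned EMA model, rather than the raw weights, is the usual reason to carry the extra copy.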

Practices

  • Code Quality
  • Reproducibility
  • Best Practices

Prerequisites

  • PyTorch (>=2.0.0)
  • Python environment with domain libraries as needed (e.g., transformers, torchvision, monai)

Installation

Add the Marketplace first:

/plugin marketplace add Orchestra-Research/AI-Research-SKILLs
/plugin install AI-Research-SKILLs@ai-research-skills

Quality Score

Verified
99 / 100
Analyzed 1 day ago

Trust Signals

Last commit: 17 days ago
Stars: 8.3k
License: MIT
Status
View source code

Similar Extensions

Arize Prompt Optimization

100

Optimizes, improves, and debugs LLM prompts using production trace data, evaluations, and annotations. Extracts prompts from spans, gathers performance signal, and runs a data-driven optimization loop using the ax CLI. Use when the user mentions optimize prompt, improve prompt, make AI respond better, improve output quality, prompt engineering, prompt tuning, or system prompt improvement.

Skill
github

Implementing Llms Litgpt

100

Implements and trains LLMs using Lightning AI's LitGPT with 20+ pretrained architectures (Llama, Gemma, Phi, Qwen, Mistral). Use when need clean model implementations, educational understanding of architectures, or production fine-tuning with LoRA/QLoRA. Single-file implementations, no abstraction layers.

Skill
davila7

Unsloth

100

Expert guidance for fast fine-tuning with Unsloth - 2-5x faster training, 50-80% less memory, LoRA/QLoRA optimization

Skill
davila7

Prompt Optimization

100

Applies prompt repetition to improve the accuracy of non-reasoning LLMs.

Skill
asklokesh

Pytorch Lightning

99

High-level PyTorch framework with Trainer class, automatic distributed training (DDP/FSDP/DeepSpeed), callbacks system, and minimal boilerplate. Scales from laptop to supercomputer with same code. Use when you want clean training loops with built-in best practices.

Skill
Orchestra-Research

Open Targets Platform Query Skill

100

Query Open Targets Platform for target-disease associations, drug target discovery, tractability/safety data, genetics/omics evidence, known drugs, for therapeutic target identification. Part of the AlterLab Academic Skills suite.

Skill
AlterLab-IEU