
PyTorch FSDP2

Skill · Verified · Active

Adds PyTorch FSDP2 (fully_shard) to training scripts with correct init, sharding, mixed precision/offload config, and distributed checkpointing. Use when models exceed single-GPU memory or when you need DTensor-based sharding with DeviceMesh.

Purpose

Enables users to correctly implement PyTorch FSDP2 in their training scripts to manage large models that exceed single-GPU memory or require advanced sharding strategies.
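As a concrete reference, a minimal sketch of that setup is below. It assumes PyTorch >= 2.6 (where fully_shard and MixedPrecisionPolicy are exported from torch.distributed.fsdp), a torchrun launch that sets LOCAL_RANK, and a hypothetical build_model() helper whose transformer blocks live in model.layers; it is an illustration of the pattern, not the skill's exact code.

```python
# Minimal sketch of FSDP2 init + sharding + mixed precision (assumptions noted above).
import os
import torch
import torch.distributed as dist
from torch.distributed.fsdp import fully_shard, MixedPrecisionPolicy

dist.init_process_group(backend="nccl")
torch.cuda.set_device(int(os.environ["LOCAL_RANK"]))

model = build_model()  # hypothetical helper returning an nn.Module with a .layers list

# Compute in bf16, reduce gradients in fp32.
mp = MixedPrecisionPolicy(param_dtype=torch.bfloat16, reduce_dtype=torch.float32)

# Shard each block first, then the root module; parameters become DTensors across ranks.
# For CPU offload, an offload_policy=CPUOffloadPolicy() argument can be passed as well.
for block in model.layers:
    fully_shard(block, mp_policy=mp)
fully_shard(model, mp_policy=mp)

optimizer = torch.optim.AdamW(model.parameters(), lr=3e-4)
```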

Features

  • Adds PyTorch FSDP2 integration to training scripts.
  • Correct initialization, sharding, and mixed precision configuration.
  • Guidance on distributed checkpointing (DCP); see the sketch after this list.
  • Handles models exceeding single-GPU memory.
  • Supports DTensor-based sharding with DeviceMesh.
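For the DCP guidance mentioned above, a hedged sketch of the save/resume flow with torch.distributed.checkpoint is shown below; the checkpoint_id path is illustrative, and model is assumed to already be sharded with fully_shard.

```python
# Sketch of saving/loading an FSDP2-sharded model with torch.distributed.checkpoint (DCP).
# get_model_state_dict returns a sharded (DTensor) state dict; dcp.save writes per-rank shards.
import torch.distributed.checkpoint as dcp
from torch.distributed.checkpoint.state_dict import get_model_state_dict, set_model_state_dict

# Save: every rank participates and writes only its own shards.
dcp.save({"model": get_model_state_dict(model)}, checkpoint_id="checkpoints/step_1000")

# Resume: load shards into a freshly built and fully_shard-ed model.
state = {"model": get_model_state_dict(model)}
dcp.load(state, checkpoint_id="checkpoints/step_1000")
set_model_state_dict(model, state["model"])
```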

Use Cases

  • When models do not fit on a single GPU.
  • For users needing DTensor-based per-parameter sharding.
  • When composing data parallelism with tensor parallelism using DeviceMesh (see the sketch after this list).
  • Integrating FSDP2 into existing PyTorch training loops.
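For the DeviceMesh composition case, the following sketch shows one way to layer FSDP2 data parallelism over tensor parallelism on a 2D mesh. The 2x4 GPU split and the ffn.w1 / ffn.w2 submodule names are assumptions for illustration, not part of the skill.

```python
# Sketch: compose FSDP2 (data parallel) with tensor parallelism over a 2D DeviceMesh.
from torch.distributed.device_mesh import init_device_mesh
from torch.distributed.fsdp import fully_shard
from torch.distributed.tensor.parallel import (
    ColwiseParallel,
    RowwiseParallel,
    parallelize_module,
)

# 8 GPUs split into 2-way data parallel x 4-way tensor parallel (illustrative).
mesh_2d = init_device_mesh("cuda", (2, 4), mesh_dim_names=("dp", "tp"))

# Tensor-parallelize the feed-forward projections on the "tp" sub-mesh first...
model = parallelize_module(
    model,
    mesh_2d["tp"],
    {"ffn.w1": ColwiseParallel(), "ffn.w2": RowwiseParallel()},
)

# ...then apply FSDP2 on the "dp" sub-mesh so parameters are sharded along both dimensions.
fully_shard(model, mesh=mesh_2d["dp"])
```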

Non-Goals

  • Replacing standard DistributedDataParallel (DDP) when the model fits on one GPU.
  • Using the older FSDP1 API.
  • Providing backwards-compatible checkpoints across PyTorch versions without careful management.
  • Supporting PyTorch versions without the FSDP2 stack.

Installation

First, add the marketplace, then install the skill:

/plugin marketplace add Orchestra-Research/AI-Research-SKILLs
/plugin install AI-Research-SKILLs@ai-research-skills

Quality Score

Verified
95/100
Analyzed 1 day ago

Trust Signals

Last commit: 17 days ago
Stars: 8.3k
License: MIT

Similar Extensions

PyTorch Lightning

99

High-level PyTorch framework with Trainer class, automatic distributed training (DDP/FSDP/DeepSpeed), callbacks system, and minimal boilerplate. Scales from laptop to supercomputer with same code. Use when you want clean training loops with built-in best practices.

Skill
Orchestra-Research

HuggingFace Accelerate

99

Simplest distributed training API. 4 lines to add distributed support to any PyTorch script. Unified API for DeepSpeed/FSDP/Megatron/DDP. Automatic device placement, mixed precision (FP16/BF16/FP8). Interactive config, single launch command. HuggingFace ecosystem standard.

Skill
davila7

TorchTitan Distributed LLM Pretraining

99

Provides PyTorch-native distributed LLM pretraining using torchtitan with 4D parallelism (FSDP2, TP, PP, CP). Use when pretraining Llama 3.1, DeepSeek V3, or custom models at scale from 8 to 512+ GPUs with Float8, torch.compile, and distributed checkpointing.

Skill
Orchestra-Research

HuggingFace Accelerate

97

Simplest distributed training API. 4 lines to add distributed support to any PyTorch script. Unified API for DeepSpeed/FSDP/Megatron/DDP. Automatic device placement, mixed precision (FP16/BF16/FP8). Interactive config, single launch command. HuggingFace ecosystem standard.

Skill
Orchestra-Research

PyTorch Lightning

100

Deep learning framework (PyTorch Lightning). Organize PyTorch code into LightningModules, configure Trainers for multi-GPU/TPU, and implement data pipelines, callbacks, logging (W&B, TensorBoard), and distributed training (DDP, FSDP, DeepSpeed) for scalable neural network training.

Skill
K-Dense-AI

Nnsight Remote Interpretability

99

Provides guidance for interpreting and manipulating neural network internals using nnsight with optional NDIF remote execution. Use when needing to run interpretability experiments on massive models (70B+) without local GPU resources, or when working with any PyTorch architecture.

Skill
davila7
