Implement Diffusion Network
Skill · Verified · Active
Implement a generative diffusion model (DDPM or score-based) with noise scheduling, U-Net architecture, training loop, and sampling procedures including DDIM acceleration. Use when building a generative model for image, audio, or molecular synthesis; implementing DDPM from a research paper; adding a custom noise schedule or conditioning mechanism; replacing a GAN-based generator with a diffusion alternative; or prototyping before scaling with production frameworks like diffusers.
To enable users to build, understand, and prototype generative diffusion models by providing a from-scratch implementation and detailed guidance.
Features
- Implements DDPM and score-based diffusion models
- Includes noise scheduling (cosine, linear) and U-Net architecture
- Provides training loop and sampling procedures (DDPM, DDIM)
- Offers detailed Python code examples for each component
- Discusses evaluation metrics like FID and LPIPS
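The noise schedules listed above can be sketched as follows (a minimal NumPy illustration of the standard linear and cosine beta schedules; function names here are illustrative, not the skill's actual API):

```python
import numpy as np

def linear_betas(T: int, beta_start: float = 1e-4, beta_end: float = 0.02) -> np.ndarray:
    # Linear schedule from the original DDPM paper: betas grow linearly over T steps.
    return np.linspace(beta_start, beta_end, T)

def cosine_betas(T: int, s: float = 0.008) -> np.ndarray:
    # Cosine schedule (Nichol & Dhariwal): define the cumulative signal level
    # alpha_bar via a squared cosine, then recover per-step betas; clipping
    # keeps the final betas numerically stable.
    steps = np.arange(T + 1)
    alpha_bar = np.cos(((steps / T) + s) / (1 + s) * np.pi / 2) ** 2
    alpha_bar = alpha_bar / alpha_bar[0]
    betas = 1 - alpha_bar[1:] / alpha_bar[:-1]
    return np.clip(betas, 0, 0.999)

T = 1000
betas = cosine_betas(T)
alpha_bar = np.cumprod(1 - betas)  # fraction of signal retained after t steps
```

`alpha_bar` decays monotonically from near 1 toward 0, which gives the closed-form forward process q(x_t | x_0) = N(sqrt(alpha_bar_t) x_0, (1 - alpha_bar_t) I).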
Use cases
- Building generative models for image, audio, or molecular synthesis
- Implementing diffusion models from research papers
- Adding custom noise schedules or conditioning mechanisms
- Replacing GANs with diffusion alternatives
- Prototyping diffusion models before scaling
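For the prototyping use cases above, the core DDPM training objective can be sketched in a few lines (a NumPy toy, not the skill's actual code; `model` is a placeholder for the U-Net, and a real implementation would use PyTorch):

```python
import numpy as np

rng = np.random.default_rng(0)

# Precomputed schedule (linear betas as a stand-in; see the DDPM paper).
T = 1000
betas = np.linspace(1e-4, 0.02, T)
alpha_bar = np.cumprod(1 - betas)

def q_sample(x0, t, eps):
    # Forward process: noise a clean sample x0 to timestep t in closed form.
    return np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1 - alpha_bar[t]) * eps

def ddpm_loss(model, x0):
    # One DDPM training step: pick a random timestep, noise the data,
    # and regress the model's output onto the injected noise (simple MSE).
    t = int(rng.integers(0, T))
    eps = rng.standard_normal(x0.shape)
    x_t = q_sample(x0, t, eps)
    pred = model(x_t, t)
    return np.mean((pred - eps) ** 2)

# Toy "model" that ignores its input; a real implementation would use a U-Net.
dummy_model = lambda x_t, t: np.zeros_like(x_t)
loss = ddpm_loss(dummy_model, rng.standard_normal((4, 8)))
```

With the zero model, the loss reduces to the mean of eps squared (about 1 for standard-normal noise); training drives it toward 0 as the network learns to predict the noise.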
Non-goals
- Providing pre-trained models or direct integration with production frameworks like diffusers
- Handling end-to-end deployment pipelines
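The DDIM acceleration named in the description can be sketched as a single deterministic update (eta = 0); this is a minimal NumPy illustration, and the function name is hypothetical rather than the skill's API:

```python
import numpy as np

def ddim_step(x_t, eps_pred, ab_t, ab_prev):
    # Deterministic DDIM update: recover the model's estimate of the clean
    # sample x0 from x_t, then jump directly to the earlier timestep. Because
    # ab_prev need not belong to the adjacent timestep, DDIM can skip steps,
    # which is the source of its sampling speedup over ancestral DDPM.
    x0_pred = (x_t - np.sqrt(1 - ab_t) * eps_pred) / np.sqrt(ab_t)
    return np.sqrt(ab_prev) * x0_pred + np.sqrt(1 - ab_prev) * eps_pred

# Sanity check: with a perfect noise prediction, one DDIM step lands exactly
# on the closed-form noised sample at the earlier timestep.
x0 = np.ones((2, 2))
eps = np.full((2, 2), 0.5)
ab_t, ab_prev = 0.5, 0.9
x_t = np.sqrt(ab_t) * x0 + np.sqrt(1 - ab_t) * eps
x_prev = ddim_step(x_t, eps, ab_t, ab_prev)
```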
Practical Utility
- Production readiness: The skill provides a detailed implementation covering the core aspects of diffusion models, but lacks explicit guidance on deployment or integration into larger production frameworks beyond prototyping.
Execution
- Validation: The skill includes type hints and basic shape checks, but does not use formal schema validation libraries such as Pydantic for its inputs and outputs.
Install
- Installation instructions: The README provides multiple installation methods (plugin, CLI) and verification steps, but this skill's own SKILL.md does not include a direct copy-paste invocation example.
Installation
/plugin install agent-almanac@pjt222-agent-almanac
Quality score
Verified
Similar extensions
PyTorch Lightning
100 · Deep learning framework (PyTorch Lightning). Organize PyTorch code into LightningModules, configure Trainers for multi-GPU/TPU, implement data pipelines, callbacks, logging (W&B, TensorBoard), and distributed training (DDP, FSDP, DeepSpeed) for scalable neural network training.
Pytorch Lightning
99 · High-level PyTorch framework with Trainer class, automatic distributed training (DDP/FSDP/DeepSpeed), callbacks system, and minimal boilerplate. Scales from laptop to supercomputer with the same code. Use when you want clean training loops with built-in best practices.
Nnsight Remote Interpretability
99 · Provides guidance for interpreting and manipulating neural network internals using nnsight, with optional NDIF remote execution. Use when you need to run interpretability experiments on massive models (70B+) without local GPU resources, or when working with any PyTorch architecture.
Huggingface Accelerate
99 · Simplest distributed training API: 4 lines to add distributed support to any PyTorch script. Unified API for DeepSpeed/FSDP/Megatron/DDP. Automatic device placement, mixed precision (FP16/BF16/FP8). Interactive config, single launch command. HuggingFace ecosystem standard.
Torch Geometric
98 · Guide for building Graph Neural Networks with PyTorch Geometric (PyG). Use this skill whenever the user asks about graph neural networks, GNNs, node classification, link prediction, graph classification, message passing networks, heterogeneous graphs, neighbor sampling, or any task involving torch_geometric / PyG. Also trigger when you see imports from torch_geometric, or the user mentions graph convolutions (GCN, GAT, GraphSAGE, GIN), graph data structures, or working with relational/network data. Even if the user just says 'graph learning' or 'geometric deep learning', use this skill.
Analyze Generative Diffusion Model
98 · Analyze pre-trained generative diffusion models (Stable Diffusion, DALL-E, Flux) by computing quality metrics (FID, IS, CLIP score, precision/recall), inspecting noise schedules, extracting and visualizing attention maps, and probing latent spaces. Use when evaluating a pre-trained generative diffusion model's output quality, comparing noise schedule variants, analyzing cross-attention patterns for text-conditioned generation, interpolating between latent codes, or detecting out-of-distribution inputs.