Torch Geometric
Skill · Verified · Active

Guide for building Graph Neural Networks with PyTorch Geometric (PyG). Use this skill whenever the user asks about graph neural networks, GNNs, node classification, link prediction, graph classification, message passing networks, heterogeneous graphs, neighbor sampling, or any task involving torch_geometric / PyG. Also trigger when you see imports from torch_geometric, or the user mentions graph convolutions (GCN, GAT, GraphSAGE, GIN), graph data structures, or working with relational/network data. Even if the user just says 'graph learning' or 'geometric deep learning', use this skill.
To serve as a comprehensive guide for building Graph Neural Networks with PyTorch Geometric, enabling users to leverage PyG effectively for various graph learning tasks.
Features
- Detailed explanation of PyG core concepts (Data, HeteroData, Transforms)
- Guidance on building GNN models with built-in layers and custom MessagePassing
- Examples for node classification, graph classification, and link prediction tasks
- Strategies for scaling GNNs to large graphs using neighbor sampling (NeighborLoader)
- Comprehensive resources for heterogeneous graph learning and explainability
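PyG's central `Data` object stores a graph as a node-feature matrix `x` plus an `edge_index` in COO format: a 2 × num_edges array of source/target node indices, with each undirected edge listed once per direction. A dependency-free sketch of that layout, using plain Python lists in place of tensors (the toy graph and feature values are invented for illustration):

```python
# A 3-node triangle graph in the COO layout PyG's Data uses.
# Each undirected edge appears twice (once per direction).
x = [[1.0, 0.0],   # features of node 0
     [0.0, 1.0],   # features of node 1
     [1.0, 1.0]]   # features of node 2

edge_index = [
    [0, 1, 1, 2, 2, 0],   # source nodes
    [1, 0, 2, 1, 0, 2],   # target nodes
]

num_nodes = len(x)
num_edges = len(edge_index[0])

# In-degree of each node = number of times it appears as a target.
degree = [edge_index[1].count(v) for v in range(num_nodes)]
print(num_nodes, num_edges, degree)   # 3 6 [2, 2, 2]
```

In real PyG code, `x` and `edge_index` would be `torch.Tensor`s passed to `torch_geometric.data.Data(x=..., edge_index=...)`.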
Use cases
- Learning to build GNNs with PyTorch Geometric from scratch.
- Implementing node classification, link prediction, or graph classification tasks.
- Developing GNN models for large graphs that do not fit into GPU memory.
- Working with heterogeneous graph data structures.
Non-goals
- Providing a direct interface to specific GNN models or pre-trained weights.
- Handling the installation and management of PyTorch or CUDA environments.
- Covering advanced GNN architectures beyond the scope of PyTorch Geometric's standard offerings.
Workflow
- Understand core PyG concepts (Data, HeteroData, Transforms).
- Learn to build GNN models using built-in layers or MessagePassing.
- Implement task-specific patterns (node/graph classification, link prediction).
- Apply scaling strategies (NeighborLoader) for large graphs.
- Develop models for heterogeneous graphs and explore explainability.
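The message-passing step underlying the workflow above can be sketched without any framework: each node gathers its neighbors' features along the edge list and aggregates them (mean aggregation here, in the spirit of GraphSAGE-style layers). The names and values below are illustrative, not PyG's actual API:

```python
# One round of mean-aggregation message passing on a toy directed graph.
x = {0: 1.0, 1: 2.0, 2: 4.0}           # scalar feature per node
edge_index = [(0, 1), (0, 2), (1, 2)]  # (source, target) pairs

# Gather: collect each target node's incoming messages (source features).
inbox = {v: [] for v in x}
for src, dst in edge_index:
    inbox[dst].append(x[src])

# Aggregate: mean of incoming messages; nodes with no in-edges keep their feature.
out = {v: sum(msgs) / len(msgs) if msgs else x[v] for v, msgs in inbox.items()}
print(out)   # {0: 1.0, 1: 1.0, 2: 1.5}
```

In PyG, this gather/aggregate pattern is what you implement by subclassing `MessagePassing` and defining `message()` and choosing an aggregation, rather than writing the loops by hand.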
Practices
- GNN model development
- Graph data handling
- Message passing implementation
- Scalable GNN training
- Heterogeneous graph learning
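The scalable-training practice above rests on neighbor sampling: instead of aggregating over every neighbor of every node, each layer samples a fixed fan-out per node, which is the idea behind PyG's NeighborLoader mini-batches. A stdlib-only sketch of that idea (the graph and fan-out are invented for illustration):

```python
import random

# Adjacency list of a small graph; hub node 0 has many neighbors.
adj = {0: [1, 2, 3, 4, 5], 1: [0], 2: [0], 3: [0], 4: [0], 5: [0]}

def sample_neighbors(node, fan_out, rng):
    """Sample at most fan_out neighbors of node, without replacement."""
    nbrs = adj[node]
    if len(nbrs) <= fan_out:
        return list(nbrs)
    return rng.sample(nbrs, fan_out)

rng = random.Random(0)            # fixed seed for reproducibility
sampled = sample_neighbors(0, 3, rng)
print(len(sampled))               # 3
```

Capping the fan-out bounds the size of each node's computation graph, which is what lets training fit in GPU memory regardless of how large the full graph is.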
Prerequisites
- PyTorch installed
- PyTorch Geometric (`torch_geometric`) installed via `uv`
- Optional: `pyg-lib`, `torch-scatter`, `torch-sparse`, `torch-cluster` for accelerated operations
Installation
npx skills add K-Dense-AI/claude-scientific-skills

Runs the Vercel skills CLI (skills.sh) via npx. Requires Node.js installed locally and at least one skills-compatible agent (Claude Code, Cursor, Codex, etc.), and assumes the repository follows the agentskills.io format.
Quality score: verified

Similar extensions
PyTorch Lightning
Score 100. Deep learning framework (PyTorch Lightning). Organize PyTorch code into LightningModules, configure Trainers for multi-GPU/TPU, and implement data pipelines, callbacks, logging (W&B, TensorBoard), and distributed training (DDP, FSDP, DeepSpeed) for scalable neural network training.
Nnsight Remote Interpretability
Score 99. Provides guidance for interpreting and manipulating neural network internals using nnsight with optional NDIF remote execution. Use when you need to run interpretability experiments on massive models (70B+) without local GPU resources, or when working with any PyTorch architecture.
PyTorch Lightning
Score 99. High-level PyTorch framework with a Trainer class, automatic distributed training (DDP/FSDP/DeepSpeed), a callbacks system, and minimal boilerplate. Scales from laptop to supercomputer with the same code. Use when you want clean training loops with built-in best practices.
HuggingFace Accelerate
Score 99. The simplest distributed training API: four lines to add distributed support to any PyTorch script. Unified API for DeepSpeed/FSDP/Megatron/DDP, automatic device placement, and mixed precision (FP16/BF16/FP8). Interactive config, single launch command. The HuggingFace ecosystem standard.
TimesFM Forecasting
Score 100. Zero-shot time series forecasting with Google's TimesFM foundation model. Use for any univariate time series (sales, sensors, energy, vitals, weather) without training a custom model. Supports CSV/DataFrame/array inputs with point forecasts and prediction intervals. Includes a preflight system-checker script to verify RAM/GPU before first use.