Torch Geometric
Skill · Verified · Active

Guide for building Graph Neural Networks with PyTorch Geometric (PyG). Use this skill whenever the user asks about graph neural networks, GNNs, node classification, link prediction, graph classification, message passing networks, heterogeneous graphs, neighbor sampling, or any task involving torch_geometric / PyG. Also trigger when you see imports from torch_geometric, or the user mentions graph convolutions (GCN, GAT, GraphSAGE, GIN), graph data structures, or working with relational/network data. Even if the user just says 'graph learning' or 'geometric deep learning', use this skill.
To serve as a comprehensive guide for building Graph Neural Networks with PyTorch Geometric, enabling users to leverage PyG effectively for various graph learning tasks.
Features
- Detailed explanation of PyG core concepts (Data, HeteroData, Transforms)
- Guidance on building GNN models with built-in layers and custom MessagePassing
- Examples for node classification, graph classification, and link prediction tasks
- Strategies for scaling GNNs to large graphs using neighbor sampling (NeighborLoader)
- Comprehensive resources for heterogeneous graph learning and explainability
Use cases
- Learning to build GNNs with PyTorch Geometric from scratch.
- Implementing node classification, link prediction, or graph classification tasks.
- Developing GNN models for large graphs that do not fit into GPU memory.
- Working with heterogeneous graph data structures.
Non-goals
- Providing a direct interface to specific GNN models or pre-trained weights.
- Handling the installation and management of PyTorch or CUDA environments.
- Covering advanced GNN architectures beyond the scope of PyTorch Geometric's standard offerings.
Workflow
- Understand core PyG concepts (Data, HeteroData, Transforms).
- Learn to build GNN models using built-in layers or MessagePassing.
- Implement task-specific patterns (node/graph classification, link prediction).
- Apply scaling strategies (NeighborLoader) for large graphs.
- Develop models for heterogeneous graphs and explore explainability.
Practices
- GNN model development
- Graph data handling
- Message passing implementation
- Scalable GNN training
- Heterogeneous graph learning
Prerequisites
- PyTorch installed
- PyTorch Geometric (`torch_geometric`) installed via `uv`
- Optional: `pyg-lib`, `torch-scatter`, `torch-sparse`, `torch-cluster` for accelerated operations
Installation
npx skills add K-Dense-AI/claude-scientific-skills

Runs the Vercel skills CLI (skills.sh) via npx; requires Node.js locally and at least one installed skills-compatible agent (Claude Code, Cursor, Codex, …). Assumes the repository follows the agentskills.io format.
Quality score
Verified
Trust signals
Similar extensions
PyTorch Lightning
Score 100: Deep learning framework (PyTorch Lightning). Organize PyTorch code into LightningModules, configure Trainers for multi-GPU/TPU, implement data pipelines, callbacks, logging (W&B, TensorBoard), distributed training (DDP, FSDP, DeepSpeed), for scalable neural network training.
Nnsight Remote Interpretability
Score 99: Provides guidance for interpreting and manipulating neural network internals using nnsight with optional NDIF remote execution. Use when needing to run interpretability experiments on massive models (70B+) without local GPU resources, or when working with any PyTorch architecture.
Pytorch Lightning
Score 99: High-level PyTorch framework with Trainer class, automatic distributed training (DDP/FSDP/DeepSpeed), callbacks system, and minimal boilerplate. Scales from laptop to supercomputer with same code. Use when you want clean training loops with built-in best practices.
Huggingface Accelerate
Score 99: Simplest distributed training API. 4 lines to add distributed support to any PyTorch script. Unified API for DeepSpeed/FSDP/Megatron/DDP. Automatic device placement, mixed precision (FP16/BF16/FP8). Interactive config, single launch command. HuggingFace ecosystem standard.
TimesFM Forecasting
Score 100: Zero-shot time series forecasting with Google's TimesFM foundation model. Use for any univariate time series (sales, sensors, energy, vitals, weather) without training a custom model. Supports CSV/DataFrame/array inputs with point forecasts and prediction intervals. Includes a preflight system checker script to verify RAM/GPU before first use.