
Nnsight Remote Interpretability

Skill Verified Active

Provides guidance for interpreting and manipulating neural network internals using nnsight, with optional remote execution on NDIF. Use it when you need to run interpretability experiments on massive models (70B+) without local GPU resources, or when working with any PyTorch architecture.

Purpose

To democratize access to large language model internals for research and experimentation by enabling consistent interpretability workflows across various model sizes and execution environments.

Features

  • Interpret and manipulate neural network internals
  • Run experiments on massive models (70B+) remotely via NDIF
  • Use the same code for local and remote execution
  • Support for any PyTorch architecture
  • Access activations, gradients, and logits for analysis
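The features above can be sketched with nnsight's tracing API. This is a minimal illustration, not the skill's own code: it assumes nnsight is installed, and the model id and layer index are arbitrary examples.

```python
# Sketch of accessing internals with nnsight (illustrative; assumes
# nnsight is installed and a small model can run locally).
from nnsight import LanguageModel

# Any Hugging Face model id works; gpt2 is just a small example.
model = LanguageModel("openai-community/gpt2", device_map="auto")

with model.trace("The Eiffel Tower is in the city of"):
    # Save a hidden state from an intermediate layer for later analysis.
    hidden = model.transformer.h[5].output[0].save()
    # The final logits are reachable from the same trace.
    logits = model.output.logits.save()

print(hidden.shape)   # (batch, seq_len, d_model)
print(logits.shape)   # (batch, seq_len, vocab_size)
```

Anything saved inside the `trace` context becomes an ordinary tensor afterwards, so activations, gradients, and logits can be analyzed with regular PyTorch code.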

Use Cases

  • Running interpretability experiments on models too large for local GPUs
  • Performing multi-token generation interventions
  • Sharing activations between different prompts
  • Analyzing PyTorch models of any architecture, including custom ones
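The first use case above hinges on nnsight's `remote=True` flag, which dispatches the same trace to NDIF instead of running it locally. The sketch below is illustrative: the model id, layer index, and API-key setup are assumptions, not part of this skill.

```python
# Same tracing code, executed remotely on NDIF instead of a local GPU.
# The API key setup and model id below are illustrative.
from nnsight import CONFIG, LanguageModel

CONFIG.set_default_api_key("YOUR_NDIF_API_KEY")

# A 70B-class model that would not fit on most local machines.
model = LanguageModel("meta-llama/Meta-Llama-3.1-70B")

# remote=True sends the trace to NDIF; the intervention code is unchanged.
with model.trace("The Eiffel Tower is in the city of", remote=True):
    hidden = model.model.layers[40].output[0].save()

print(hidden.shape)
```

Because only the `remote` flag changes, experiments developed locally on small models can be re-run on massive models without rewriting the intervention code.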

Non-Goals

  • Providing a unified API across all model types (use TransformerLens for this)
  • Declarative, shareable interventions (use pyvene for this)
  • Training SAEs (use SAELens for this)
  • Working exclusively with small models locally (TransformerLens may be simpler)

Installation

First add the marketplace, then install the skill:

/plugin marketplace add Orchestra-Research/AI-Research-SKILLs
/plugin install AI-Research-SKILLs@ai-research-skills

Quality Score

Verified
99/100
Analyzed about 20 hours ago

Trust Signals

Last commit: 16 days ago
Stars: 8.3k
License: MIT

Similar Extensions

PyTorch Lightning

100

Deep learning framework (PyTorch Lightning). Organize PyTorch code into LightningModules, configure Trainers for multi-GPU/TPU, implement data pipelines, callbacks, logging (W&B, TensorBoard), distributed training (DDP, FSDP, DeepSpeed), for scalable neural network training.

Skill
K-Dense-AI

Implementing Llms Litgpt

100

Implements and trains LLMs using Lightning AI's LitGPT with 20+ pretrained architectures (Llama, Gemma, Phi, Qwen, Mistral). Use when you need clean model implementations, an educational understanding of architectures, or production fine-tuning with LoRA/QLoRA. Single-file implementations, no abstraction layers.

Skill
davila7

Crabbox

100

Use Crabbox for OpenClaw remote validation across Linux, macOS, Windows, and WSL2. Default to Blacksmith Testbox for broad Linux coverage; includes notes on direct Blacksmith and self-owned AWS/Hetzner fallbacks for when Crabbox fails.

Skill
steipete

Transformer Lens Interpretability

99

Provides guidance for mechanistic interpretability research using TransformerLens to inspect and manipulate transformer internals via HookPoints and activation caching. Use when reverse-engineering model algorithms, studying attention patterns, or performing activation patching experiments.

Skill
Orchestra-Research

ML Training Recipes

99

Battle-tested PyTorch training recipes for all domains — LLMs, vision, diffusion, medical imaging, protein/drug discovery, spatial omics, genomics. Covers training loops, optimizer selection (AdamW, Muon), LR scheduling, mixed precision, debugging, and systematic experimentation. Use when training or fine-tuning neural networks, debugging loss spikes or OOM, choosing architectures, or optimizing GPU throughput.

Skill
Orchestra-Research

© 2025 SkillRepo · Find the right skill, skip the noise.