TransformerLens Mechanistic Interpretability
Provides guidance for mechanistic interpretability research using TransformerLens to inspect and manipulate transformer internals via HookPoints and activation caching. Use when reverse-engineering model algorithms, studying attention patterns, or performing activation patching experiments.
Enables researchers to deeply inspect and manipulate the internals of transformer models in order to understand their learned algorithms and behavior.
Features
- Inspect and manipulate transformer internals
- Utilize HookPoints and activation caching
- Perform activation patching and causal tracing
- Analyze attention patterns and circuits
- Support for 50+ transformer architectures
Use Cases
- Reverse-engineering model algorithms
- Studying attention patterns and information flow
- Performing activation patching or causal tracing experiments
- Analyzing specific circuits like induction heads
Non-Goals
- Working with non-transformer architectures
- Training or analyzing Sparse Autoencoders directly
- Providing remote execution on massive models
- Offering higher-level causal intervention abstractions
Practices
- Model Interpretability
- Transformer Analysis
- Code Research
Prerequisites
- Python >= 3.8
- transformer-lens >= 2.0.0
- torch >= 2.0.0
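The Python-side prerequisites above can be satisfied with pip (a typical setup command, not prescribed by the listing):

```shell
# Install the prerequisite library versions; assumes pip targets
# a Python >= 3.8 environment.
pip install "transformer-lens>=2.0.0" "torch>=2.0.0"
```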
Trust
- Issues attention: 17 issues opened and 4 closed in the last 90 days. The low closure rate indicates slower response times for open issues.
Installation
npx skills add davila7/claude-code-templates
This runs the Vercel skills CLI (skills.sh) via npx. It requires Node.js locally and at least one installed skills-compatible agent (Claude Code, Cursor, Codex, ...), and assumes the repo follows the agentskills.io format.
Similar Extensions
- Transformer Lens Interpretability (99): Provides guidance for mechanistic interpretability research using TransformerLens to inspect and manipulate transformer internals via HookPoints and activation caching. Use when reverse-engineering model algorithms, studying attention patterns, or performing activation patching experiments.
- Embedding Strategies (100): Select and optimize embedding models for semantic search and RAG applications. Use when choosing embedding models, implementing chunking strategies, or optimizing embedding quality for specific domains.
- AWS CDK Development (100): AWS Cloud Development Kit (CDK) expert for building cloud infrastructure with TypeScript/Python. Use when creating CDK stacks, defining CDK constructs, implementing infrastructure as code, or when the user mentions CDK, CloudFormation, IaC, cdk synth, cdk deploy, or wants to define AWS infrastructure programmatically. Covers CDK app structure, construct patterns, stack composition, and deployment workflows.
- Fit Drift Diffusion Model (100): Fit cognitive drift-diffusion models (Ratcliff DDM) to reaction time and accuracy data with parameter estimation (drift rate, boundary separation, non-decision time), model comparison, and parameter recovery validation. Use when modeling binary decision-making with reaction time data, estimating cognitive parameters from experimental data, comparing sequential sampling model variants, or decomposing speed-accuracy tradeoff effects into latent cognitive components.
- UI/UX Pro Max (100): UI/UX design intelligence with searchable style, palette, typography, and chart databases. Use when designing UI components, choosing colors/fonts, reviewing code for UX issues, building landing pages, or implementing responsive layouts.
- Google TTS (100): Convert documents and text to audio using Google Cloud Text-to-Speech. Use this skill when the user wants to: narrate a document, read aloud text, generate audio from a file, convert text to speech, create a recording of documentation or analysis, create a podcast from a document, or use Google TTS/text-to-speech. Trigger phrases: "read this aloud", "narrate this", "create a recording", "text to speech", "TTS", "convert to audio", "audio from document", "listen to this", "generate audio", "google tts", "create a podcast".