
NanoGPT

Skill · Active

Educational GPT implementation in ~300 lines. Reproduces GPT-2 (124M) on OpenWebText. Clean, hackable code for learning transformers. By Andrej Karpathy. Perfect for understanding GPT architecture from scratch. Train on Shakespeare (CPU) or OpenWebText (multi-GPU).

Purpose

To provide a clear, concise, and hackable implementation of the GPT-2 architecture for educational purposes, enabling users to understand transformer models from scratch.
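The core mechanism such an implementation teaches is causal self-attention. The sketch below is not the actual nanoGPT code (which uses PyTorch and multi-head attention); it is a framework-free, single-head illustration in NumPy of the scaled dot-product attention with a causal mask that a GPT block is built around:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def causal_self_attention(x, w_q, w_k, w_v):
    """Single-head causal self-attention over a (T, C) sequence."""
    T, C = x.shape
    q, k, v = x @ w_q, x @ w_k, x @ w_v       # project to queries/keys/values
    att = (q @ k.T) / np.sqrt(k.shape[-1])    # scaled dot-product scores, shape (T, T)
    mask = np.triu(np.ones((T, T), dtype=bool), k=1)
    att[mask] = -np.inf                       # causal mask: no attending to future positions
    return softmax(att) @ v                   # attention-weighted sum of values

rng = np.random.default_rng(0)
T, C = 4, 8                                   # toy sequence length and channel size
x = rng.standard_normal((T, C))
w_q, w_k, w_v = (rng.standard_normal((C, C)) * 0.02 for _ in range(3))
out = causal_self_attention(x, w_q, w_k, w_v)
print(out.shape)  # (4, 8)
```

Because of the causal mask, perturbing the last token of the input can only change the last row of the output, which is what makes autoregressive training and generation work.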

Features

  • Minimalist GPT-2 (124M) implementation
  • Reproduces GPT-2 on OpenWebText
  • Clean, hackable code for learning transformers
  • Supports training on CPU (Shakespeare) or multi-GPU (OpenWebText)
  • Includes example configurations and data preparation scripts

Use cases

  • Learning transformer architecture from scratch
  • Experimenting with GPT model components
  • Teaching or understanding deep learning models
  • Prototyping new transformer ideas

Non-goals

  • Production-ready deployment of LLMs
  • State-of-the-art performance benchmarks
  • Large-scale distributed training beyond 8 GPUs
  • Complex model tuning for specific applications

Workflow

  1. Prepare data (e.g., Shakespeare or OpenWebText)
  2. Configure training parameters
  3. Train the model
  4. Generate text from the trained model
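For the CPU-friendly Shakespeare character-level example, the four steps above correspond to commands along these lines (based on the upstream nanoGPT README; file paths and config names may differ across versions of the repository):

```shell
git clone https://github.com/karpathy/nanoGPT
cd nanoGPT

# 1. Prepare data: tokenize tiny Shakespeare into train.bin / val.bin
python data/shakespeare_char/prepare.py

# 2-3. Configure and train: the config file sets model size, batch size, iterations
python train.py config/train_shakespeare_char.py

# 4. Generate text from the trained checkpoint
python sample.py --out_dir=out-shakespeare-char
```

The OpenWebText reproduction follows the same pattern with a different prepare script and training config, launched via `torchrun` for multi-GPU runs.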

Practices

  • Model Architecture
  • Transformer Implementation
  • Educational Code

Prerequisites

  • Python 3.8+
  • PyTorch (torch)
  • numpy, transformers, datasets, tiktoken, wandb, tqdm
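The dependencies listed above can be installed in one step (versions unpinned, matching the upstream README's suggested install):

```shell
pip install torch numpy transformers datasets tiktoken wandb tqdm
```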

Practical Utility

  • Info (production readiness): while the code is clean and well organized, it is an educational tool rather than a production-ready system, and training models at GPT-2 scale requires significant computational resources not typically available for immediate production use.

Trust

  • Warning (issue backlog): in the last 90 days, 17 issues were opened and only 4 were closed, a low closure rate that suggests slow maintainer response.

Installation

npx skills add davila7/claude-code-templates

Runs the Vercel skills CLI (skills.sh) via npx. Requires Node.js installed locally, plus at least one skills-compatible agent (Claude Code, Cursor, Codex, etc.), and assumes the repository follows the agentskills.io format.

Quality score

87/100
Analyzed 1 day ago

Trust signals

Last commit: 1 day ago
Stars: 27.2k
License: MIT

Similar extensions


PyTorch Lightning

100

Deep learning framework (PyTorch Lightning). Organize PyTorch code into LightningModules, configure Trainers for multi-GPU/TPU, implement data pipelines, callbacks, logging (W&B, TensorBoard), distributed training (DDP, FSDP, DeepSpeed), for scalable neural network training.

Skill
K-Dense-AI

Pytorch Lightning

99

High-level PyTorch framework with Trainer class, automatic distributed training (DDP/FSDP/DeepSpeed), callbacks system, and minimal boilerplate. Scales from laptop to supercomputer with same code. Use when you want clean training loops with built-in best practices.

Skill
Orchestra-Research

Nnsight Remote Interpretability

99

Provides guidance for interpreting and manipulating neural network internals using nnsight with optional NDIF remote execution. Use when needing to run interpretability experiments on massive models (70B+) without local GPU resources, or when working with any PyTorch architecture.

Skill
davila7

Huggingface Accelerate

99

Simplest distributed training API. 4 lines to add distributed support to any PyTorch script. Unified API for DeepSpeed/FSDP/Megatron/DDP. Automatic device placement, mixed precision (FP16/BF16/FP8). Interactive config, single launch command. HuggingFace ecosystem standard.

Skill
davila7

TorchTitan Distributed LLM Pretraining

99

Provides PyTorch-native distributed LLM pretraining using torchtitan with 4D parallelism (FSDP2, TP, PP, CP). Use when pretraining Llama 3.1, DeepSeek V3, or custom models at scale from 8 to 512+ GPUs with Float8, torch.compile, and distributed checkpointing.

Skill
Orchestra-Research