Long Context
Verified · Active
Extend context windows of transformer models using RoPE, YaRN, ALiBi, and position interpolation techniques. Use when processing long documents (32k-128k+ tokens), extending pre-trained models beyond original context limits, or implementing efficient positional encodings. Covers rotary embeddings, attention biases, interpolation methods, and extrapolation strategies for LLMs.
To equip users with advanced knowledge and practical guidance on extending transformer model context windows for processing long documents and improving LLM capabilities.
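As a taste of the attention-bias approach mentioned above, here is a minimal ALiBi sketch. It is an illustration under stated assumptions, not the skill's bundled code; in particular, the closed-form slopes assume the head count is a power of two, as in the original ALiBi paper.

```python
import torch

def alibi_bias(num_heads: int, seq_len: int) -> torch.Tensor:
    # ALiBi: replace positional embeddings with a per-head linear penalty on
    # attention logits. Slopes form a geometric sequence; this closed form
    # assumes num_heads is a power of two (illustrative assumption).
    slopes = torch.tensor([2.0 ** (-8.0 * (h + 1) / num_heads)
                           for h in range(num_heads)])
    # distance[i, j] = i - j for tokens j at or before i; future positions
    # clamp to 0 and are masked out by causal attention anyway.
    pos = torch.arange(seq_len)
    distance = (pos[:, None] - pos[None, :]).clamp(min=0).float()
    # Shape (num_heads, seq_len, seq_len), added to the q·k logits.
    return -slopes[:, None, None] * distance[None, :, :]
```

Because the penalty grows linearly with distance, ALiBi-trained models tend to degrade gracefully on sequences longer than those seen during training.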
Features
- Explains RoPE, YaRN, ALiBi, and position interpolation
- Provides Python code implementations for core techniques (a minimal RoPE sketch follows this list)
- Details fine-tuning strategies for context extension
- Covers production deployment and memory optimization
- Compares different context extension methods
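As a flavor of those implementations, here is a minimal RoPE sketch. It is an illustration, not the code the skill ships: each even/odd pair of head dimensions is rotated by a position-dependent angle, so the q·k dot product depends only on relative offsets.

```python
import torch

def rope_angles(head_dim: int, max_len: int, base: float = 10000.0):
    # Standard RoPE frequencies: theta_i = base^(-2i/d), one per dimension pair.
    inv_freq = 1.0 / (base ** (torch.arange(0, head_dim, 2).float() / head_dim))
    angles = torch.outer(torch.arange(max_len).float(), inv_freq)
    return angles.cos(), angles.sin()  # each (max_len, head_dim // 2)

def apply_rope(x: torch.Tensor, cos: torch.Tensor, sin: torch.Tensor):
    # x: (..., seq_len, head_dim); rotate each (even, odd) dimension pair.
    x1, x2 = x[..., 0::2], x[..., 1::2]
    out = torch.stack((x1 * cos - x2 * sin, x1 * sin + x2 * cos), dim=-1)
    return out.flatten(-2)  # back to (..., seq_len, head_dim)
```

Apply it to both queries and keys before attention; cos and sin are precomputed once per sequence length.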
Use Cases
- Processing long documents (32k-128k+ tokens)
- Extending pre-trained models beyond original context limits (see the interpolation sketch after this list)
- Implementing efficient positional encodings
- Training models with length extrapolation capabilities
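The model-extension use case typically rests on position interpolation. The sketch below shows the linear variant (Chen et al., 2023); the 4096-token original window is a placeholder rather than any particular model's limit, and YaRN's per-frequency-band ramp is deliberately omitted.

```python
import torch

def interpolated_rope_angles(head_dim: int, target_len: int,
                             original_len: int = 4096, base: float = 10000.0):
    # Linear position interpolation: compress positions by original/target so
    # a target_len-token sequence reuses the angle range seen in pre-training.
    # YaRN refines this by interpolating low-frequency bands more aggressively
    # than high-frequency ones; that per-band ramp is omitted here.
    scale = original_len / target_len  # e.g. 4096 / 32768 = 0.125
    inv_freq = 1.0 / (base ** (torch.arange(0, head_dim, 2).float() / head_dim))
    positions = torch.arange(target_len).float() * scale
    return torch.outer(positions, inv_freq)  # (target_len, head_dim // 2)
```

Even with interpolation, a brief fine-tune at the target length is usually needed to recover quality.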
Non-Goals
- Replacing existing transformer models
- Providing pre-trained models with extended context
- Covering all possible positional encoding methods
Trust
- Issues: 17 open and 4 closed in the last 90 days, a closure rate below 50% with a moderate backlog; maintainer responsiveness could be improved.
Installation
npx skills add davila7/claude-code-templates
Runs the Vercel skills CLI (skills.sh) via npx; requires Node.js locally and at least one installed skills-compatible agent (Claude Code, Cursor, Codex, …). Assumes the repo follows the agentskills.io format.
Quality Score
Verified
Similar Extensions
- Transformers (score 98): This skill should be used when working with pre-trained transformer models for natural language processing, computer vision, audio, or multimodal tasks. Use for text generation, classification, question answering, translation, summarization, image classification, object detection, speech recognition, and fine-tuning models on custom datasets.
- Context Mode Ops (score 100): Manage context-mode GitHub issues, PRs, releases, and marketing with a parallel subagent army. Orchestrates 10-20 dynamic agents per task. Use when triaging issues, reviewing PRs, releasing versions, writing LinkedIn posts, announcing releases, fixing bugs, merging contributions, validating ENV vars, testing adapters, or syncing branches.
- Chat Format (score 100): Format prompts for different LLM providers with chat templates and HNSW-powered context retrieval
- Oh My Claudecode (score 100): Process-first advisor routing for Claude, Codex, or Gemini via `omc ask`, with artifact capture and no raw CLI assembly
- Wrap Up Ritual (score 100): End-of-session ritual that audits changes, runs quality checks, captures learnings, and produces a session summary. Use when saying "wrap up", "done for the day", "finish coding", or ending a coding session.