Long Context
Extend context windows of transformer models using RoPE, YaRN, ALiBi, and position interpolation techniques. Use when processing long documents (32k-128k+ tokens), extending pre-trained models beyond original context limits, or implementing efficient positional encodings. Covers rotary embeddings, attention biases, interpolation methods, and extrapolation strategies for LLMs.
Equips users with advanced knowledge and practical guidance on extending transformer context windows to process long documents and improve LLM capabilities.
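The description names RoPE (rotary position embeddings) as the core positional-encoding technique. As a rough illustration of the idea, here is a minimal PyTorch sketch; the function name `apply_rope`, the tensor layout, and the default base of 10000 are assumptions for illustration, not code taken from this skill.

```python
import torch

def apply_rope(x: torch.Tensor, base: float = 10000.0) -> torch.Tensor:
    """Rotate (batch, seq, heads, head_dim) vectors by position-dependent angles."""
    b, s, h, d = x.shape                      # head_dim d must be even
    # Pair i rotates at frequency theta_i = base^(-2i/d), as in the RoPE paper.
    inv_freq = base ** (-torch.arange(0, d, 2, dtype=torch.float32) / d)
    pos = torch.arange(s, dtype=torch.float32)
    angles = torch.outer(pos, inv_freq)       # (seq, d/2)
    cos = angles.cos()[None, :, None, :]      # broadcast over batch and heads
    sin = angles.sin()[None, :, None, :]
    x1, x2 = x[..., 0::2], x[..., 1::2]       # interleaved rotation pairs
    out = torch.empty_like(x)
    out[..., 0::2] = x1 * cos - x2 * sin      # 2-D rotation of each pair
    out[..., 1::2] = x1 * sin + x2 * cos
    return out

# Applied to queries and keys before attention, e.g.:
# q = apply_rope(q); k = apply_rope(k)
```

Because the rotation encodes relative position in the query-key dot product, RoPE is the natural starting point for the interpolation and extrapolation methods this skill covers.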
Features
- Explains RoPE, YaRN, ALiBi, and Position Interpolation
- Provides Python code implementations for core techniques (an ALiBi sketch follows this list)
- Details fine-tuning strategies for context extension
- Covers production deployment and memory optimization
- Compares different context extension methods
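As a taste of what such an implementation looks like, here is a hedged sketch of ALiBi (attention with linear biases), assuming the common geometric slope schedule from the ALiBi paper (exact only when the head count is a power of two); the helper name `alibi_bias` is illustrative.

```python
import torch

def alibi_bias(n_heads: int, seq_len: int) -> torch.Tensor:
    """(n_heads, seq_len, seq_len) additive bias for causal attention logits."""
    # Per-head slopes form a geometric sequence: 2^(-8/n), 2^(-16/n), ...
    slopes = 2.0 ** (-8.0 * torch.arange(1, n_heads + 1, dtype=torch.float32) / n_heads)
    pos = torch.arange(seq_len, dtype=torch.float32)
    dist = pos[:, None] - pos[None, :]        # i - j; >= 0 in the causal region
    bias = -slopes[:, None, None] * dist[None, :, :]
    # Future (non-causal) positions are masked out entirely.
    return bias.masked_fill(dist[None, :, :] < 0, float("-inf"))

# Added to attention scores instead of positional embeddings:
# scores = q @ k.transpose(-2, -1) / d**0.5 + alibi_bias(n_heads, seq_len)
```

Because the penalty grows linearly with distance rather than being baked into learned embeddings, ALiBi extrapolates to sequences longer than those seen in training.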
Use cases
- Processing long documents (32k-128k+ tokens)
- Extending pre-trained models beyond original context limits (a position interpolation sketch follows this list)
- Implementing efficient positional encodings
- Training models with length extrapolation capabilities
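For the second use case, position interpolation (PI) compresses positions into the range the model was trained on before computing RoPE angles, so a model trained at, say, 4k positions can attend over 16k. A minimal sketch under the same assumptions as the RoPE example above; `orig_ctx` and `target_ctx` are illustrative parameter names.

```python
import torch

def interpolated_rope(x: torch.Tensor, orig_ctx: int, target_ctx: int,
                      base: float = 10000.0) -> torch.Tensor:
    """RoPE with positions rescaled by orig_ctx/target_ctx (position interpolation)."""
    b, s, h, d = x.shape
    scale = orig_ctx / target_ctx             # e.g. 4096 / 16384 = 0.25
    inv_freq = base ** (-torch.arange(0, d, 2, dtype=torch.float32) / d)
    # Compressed positions stay inside the range the model was trained on.
    pos = torch.arange(s, dtype=torch.float32) * scale
    angles = torch.outer(pos, inv_freq)
    cos = angles.cos()[None, :, None, :]
    sin = angles.sin()[None, :, None, :]
    x1, x2 = x[..., 0::2], x[..., 1::2]
    out = torch.empty_like(x)
    out[..., 0::2] = x1 * cos - x2 * sin
    out[..., 1::2] = x1 * sin + x2 * cos
    return out
```

YaRN refines this idea by interpolating low-frequency components more aggressively than high-frequency ones and adding an attention temperature correction, which typically requires far less fine-tuning data than uniform interpolation.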
Non-goals
- Replacing existing transformer models
- Providing pre-trained models with extended context
- Covering all possible positional encoding methods
Trust
- Issues: There are 17 open issues and 4 closed issues in the last 90 days, indicating a closure rate below 50% and a moderate number of open issues, suggesting maintainer responsiveness could be improved.
Installation
npx skills add davila7/claude-code-templates
Runs the Vercel skills CLI (skills.sh) via npx; requires Node.js locally and at least one installed skills-compatible agent (Claude Code, Cursor, Codex, …). Assumes the repo follows the agentskills.io format.
Quality score
Verified
Trust signals
Similar extensions
Transformers
98 · This skill should be used when working with pre-trained transformer models for natural language processing, computer vision, audio, or multimodal tasks. Use for text generation, classification, question answering, translation, summarization, image classification, object detection, speech recognition, and fine-tuning models on custom datasets.
Context Mode Ops
100 · Manage GitHub issues, PRs, releases, and marketing with parallel subagent armies in context mode. Orchestrates 10-20 dynamic agents per task. Use when triaging issues, reviewing PRs, publishing releases, writing LinkedIn posts, announcing releases, fixing bugs, merging contributions, validating ENV variables, testing adapters, or syncing branches.
Chat Format
100 · Format prompts for different LLM providers with chat templates and HNSW-powered context retrieval
Oh My Claudecode
100 · Process-first advisor routing for Claude, Codex, or Gemini via `omc ask`, with artifact capture and no raw CLI assembly
Wrap Up Ritual
100 · End-of-session ritual that audits changes, runs quality checks, captures learnings, and produces a session summary. Use when saying "wrap up", "done for the day", "finish coding", or ending a coding session.