Knowledge Distillation
Skill · Verified · Active
Compress large language models using knowledge distillation from teacher to student models. Use when deploying smaller models with retained performance, transferring GPT-4 capabilities to open-source models, or reducing inference costs. Covers temperature scaling, soft targets, reverse KLD, logit distillation, and MiniLLM training strategies.
The goal is to enable users to compress large language models effectively, retaining performance while reducing size and inference costs, through practical guidance and code examples for knowledge distillation.
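As a reference point for what the skill covers, here is a minimal sketch of logit distillation with temperature scaling and soft targets. The function names, hyperparameter values, and the T² scaling convention follow the standard Hinton-style recipe rather than the skill's own scripts.

```python
import torch.nn.functional as F

def soft_target_kd_loss(student_logits, teacher_logits, temperature=2.0):
    """Forward KL between temperature-softened teacher and student distributions.

    Expects logits of shape (num_tokens, vocab_size). Multiplying by T^2 keeps
    gradient magnitudes comparable across temperatures (Hinton et al., 2015).
    """
    student_log_probs = F.log_softmax(student_logits / temperature, dim=-1)
    teacher_probs = F.softmax(teacher_logits / temperature, dim=-1)
    kd = F.kl_div(student_log_probs, teacher_probs, reduction="batchmean")
    return kd * temperature ** 2

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=2.0, alpha=0.5):
    """Weighted sum of the soft-target KD loss and hard-label cross-entropy."""
    ce = F.cross_entropy(student_logits, labels)
    kd = soft_target_kd_loss(student_logits, teacher_logits, temperature)
    return alpha * kd + (1.0 - alpha) * ce
```

In practice, `student_logits` and `teacher_logits` come from a forward pass of both models over the same batch, flattened to `(num_tokens, vocab_size)`.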
Features
- Compress LLMs using knowledge distillation
- Transfer capabilities from large to smaller models
- Reduce inference costs
- Implement temperature scaling, soft targets, and reverse KLD (a reverse-KLD sketch follows this list)
- Provide training scripts and evaluation methods
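On the reverse-KLD item above: MiniLLM-style distillation minimizes KL(student ∥ teacher) instead of the forward KL, which is mode-seeking and discourages the student from placing probability mass where the teacher assigns very little. The token-level version below is an illustrative simplification; the full MiniLLM objective is sequence-level and optimized with policy gradients.

```python
import torch.nn.functional as F

def reverse_kld_loss(student_logits, teacher_logits, temperature=1.0):
    """Token-level reverse KL: KL(student || teacher), shapes (num_tokens, vocab).

    Note: the actual MiniLLM method optimizes a sequence-level reverse KL via
    policy-gradient training; this per-token form only illustrates the
    direction of the divergence.
    """
    s_log_probs = F.log_softmax(student_logits / temperature, dim=-1)
    t_log_probs = F.log_softmax(teacher_logits / temperature, dim=-1)
    s_probs = s_log_probs.exp()
    # KL(q_s || p_t) = sum_x q_s(x) * (log q_s(x) - log p_t(x))
    kl = (s_probs * (s_log_probs - t_log_probs)).sum(dim=-1)
    return kl.mean() * temperature ** 2
```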
Use Cases
- Compressing models from 70B to 7B parameters while retaining performance
- Transferring capabilities from proprietary models (like GPT-4) to open-source alternatives (a sequence-level sketch follows this list)
- Reducing inference costs by deploying smaller, distilled models
- Creating specialized models by distilling domain-specific knowledge
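For the proprietary-teacher use case above, logits are not available through an API, so distillation is usually done at the sequence level: collect teacher responses to a prompt set and fine-tune the student on them with plain cross-entropy. A hypothetical outline using Hugging Face transformers; the model name, prompt data, and formatting are placeholders, and collecting the teacher outputs is assumed to happen offline.

```python
from datasets import Dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

# Placeholder data: (prompt, response) pairs gathered offline from the
# proprietary teacher's API; collection code is out of scope here.
teacher_data = [
    {"prompt": "Explain knowledge distillation in two sentences.",
     "response": "..."},
]

student_name = "meta-llama/Llama-2-7b-hf"  # illustrative student checkpoint
tokenizer = AutoTokenizer.from_pretrained(student_name)
tokenizer.pad_token = tokenizer.eos_token  # Llama tokenizers ship without a pad token
model = AutoModelForCausalLM.from_pretrained(student_name)

def tokenize(example):
    text = example["prompt"] + "\n" + example["response"] + tokenizer.eos_token
    return tokenizer(text, truncation=True, max_length=1024)

dataset = Dataset.from_list(teacher_data).map(
    tokenize, remove_columns=["prompt", "response"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="distilled-student",
                           per_device_train_batch_size=4,
                           num_train_epochs=1),
    train_dataset=dataset,
    # mlm=False gives standard causal-LM labels (inputs shifted by one).
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```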
Non-Goals
- Training models from scratch without a teacher model
- Performing inference on distilled models (focus is on training)
- Covering advanced MLOps deployment strategies beyond the training script
Execution
- info: Pinned dependencies: dependencies are listed in SKILL.md but not strictly pinned with lockfiles; standard package managers would typically resolve exact versions during installation.
Practical Utility
- info: Edge cases: the skill touches on hyperparameters and model-size ratios, which bear on effective application, but does not explicitly document failure modes or recovery steps for specific scenarios.
Installation
npx skills add davila7/claude-code-templates
Runs the Vercel skills CLI (skills.sh) via npx. Requires a local Node.js installation and at least one skills-compatible agent (Claude Code, Cursor, Codex, etc.). Assumes the repository follows the agentskills.io format.
Quality Score
Verified Similar Extensions
PyTorch Lightning
Score: 100. Deep learning framework (PyTorch Lightning). Organize PyTorch code into LightningModules, configure Trainers for multi-GPU/TPU, implement data pipelines, callbacks, logging (W&B, TensorBoard), distributed training (DDP, FSDP, DeepSpeed), for scalable neural network training.
TimesFM Forecasting
Score: 100. Zero-shot time series forecasting with Google's TimesFM foundation model. Use for any univariate time series (sales, sensors, energy, vitals, weather) without training a custom model. Supports CSV/DataFrame/array inputs with point forecasts and prediction intervals. Includes a preflight system checker script to verify RAM/GPU before first use.
Nnsight Remote Interpretability
Score: 99. Provides guidance for interpreting and manipulating neural network internals using nnsight with optional NDIF remote execution. Use when needing to run interpretability experiments on massive models (70B+) without local GPU resources, or when working with any PyTorch architecture.
Chat Format
Score: 100. Format prompts for different LLM providers with chat templates and HNSW-powered context retrieval.
Oh My Claudecode
Score: 100. Process-first advisor routing for Claude, Codex, or Gemini via `omc ask`, with artifact capture and no raw CLI assembly.