[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"extension-skill-Orchestra-Research-tensorrt-llm-de":3,"guides-for-Orchestra-Research-tensorrt-llm":1863,"similar-k173qdxjdamys2gqnh6takycgh86m8aj-de":1864},{"_creationTime":4,"_id":5,"children":6,"community":7,"display":9,"evaluation":15,"identity":252,"isFallback":233,"parentExtension":257,"providers":315,"relations":319,"repo":320,"tags":1861,"workflow":1862},1778695116697.188,"k173qdxjdamys2gqnh6takycgh86m8aj",[],{"reviewCount":8},0,{"description":10,"installMethods":11,"name":13,"sourceUrl":14},"Optimizes LLM inference with NVIDIA TensorRT for maximum throughput and lowest latency. Use for production deployment on NVIDIA GPUs (A100/H100), when you need 10-100x faster inference than PyTorch, or for serving models with quantization (FP8/INT4), in-flight batching, and multi-GPU scaling.",{"claudeCode":12},"Orchestra-Research/AI-Research-SKILLs","tensorrt-llm","https://github.com/Orchestra-Research/AI-Research-SKILLs",{"_creationTime":16,"_id":17,"extensionId":5,"locale":18,"result":19,"trustSignals":231,"workflow":250},1778696687593.4363,"kn7bk7fjpxa9j2c6dnmf526fy986mc65","en",{"checks":20,"evaluatedAt":195,"extensionSummary":196,"features":197,"nonGoals":203,"promptVersionExtension":208,"promptVersionScoring":209,"purpose":210,"rationale":211,"score":212,"summary":213,"tags":214,"targetMarket":224,"tier":225,"useCases":226},[21,26,29,32,36,39,44,48,51,54,58,62,65,69,72,75,78,81,84,87,91,95,99,103,107,110,114,117,121,124,127,130,133,136,139,143,147,150,153,157,160,163,166,169,173,176,179,182,185,188,192],{"category":22,"check":23,"severity":24,"summary":25},"Practical Utility","Problem relevance","pass","The description clearly states the problem of optimizing LLM inference for maximum throughput and lowest latency on NVIDIA GPUs, targeting production deployments.",{"category":22,"check":27,"severity":24,"summary":28},"Unique selling proposition","The skill offers significant value over a basic PyTorch deployment by leveraging NVIDIA TensorRT-LLM for substantial performance gains (10-100x faster inference) through advanced features like quantization and batching.",{"category":22,"check":30,"severity":24,"summary":31},"Production readiness","The skill provides comprehensive documentation for installation, serving with trtllm-serve, inference, monitoring, and deployment, indicating readiness for production workflows.",{"category":33,"check":34,"severity":24,"summary":35},"Scope","Single responsibility principle","The extension focuses exclusively on optimizing LLM inference using NVIDIA TensorRT-LLM, covering performance tuning, deployment, and serving, without venturing into unrelated domains.",{"category":33,"check":37,"severity":24,"summary":38},"Description quality","The description is accurate, concise, and effectively communicates the core purpose and key use cases of the TensorRT-LLM skill.",{"category":40,"check":41,"severity":42,"summary":43},"Invocation","Scoped tools","not_applicable","This skill appears to be a wrapper around a library and does not expose distinct tools in the manifest, making this check not applicable.",{"category":45,"check":46,"severity":24,"summary":47},"Documentation","Configuration & parameter reference","The documentation provides detailed explanations and examples for various parameters related to model loading, parallelism, quantization, and serving configurations.",{"category":33,"check":49,"severity":42,"summary":50},"Tool naming","As this skill does not expose specific user-facing tools via a manifest, tool naming conventions are not applicable.",{"category":33,"check":52,"severity":42,"summary":53},"Minimal I/O surface","This skill operates primarily through library calls and API interactions described in documentation rather than discrete tools with input/output schemas, making this check not applicable.",{"category":55,"check":56,"severity":24,"summary":57},"License","License usability","The license is MIT, as detected in trust signals and confirmed in the LICENSE file, which is a permissive open-source license.",{"category":59,"check":60,"severity":24,"summary":61},"Maintenance","Commit recency","The repository was last pushed to on April 28, 2026, which is within the last 90 days, indicating active maintenance.",{"category":59,"check":63,"severity":24,"summary":64},"Dependency Management","The presence of a lockfile (hasLockfile: true) and explicit dependency declarations in SKILL.md indicate suitable dependency management practices.",{"category":66,"check":67,"severity":24,"summary":68},"Security","Secret Management","The documentation and provided code examples do not indicate the handling or exposure of secrets; no sensitive information is shown in output or committed files.",{"category":66,"check":70,"severity":24,"summary":71},"Injection","The skill focuses on library usage and deployment configurations, with no indication of loading or executing untrusted third-party data as instructions.",{"category":66,"check":73,"severity":24,"summary":74},"Transitive Supply-Chain Grenades","The skill relies on local installations and bundled library usage, with no evidence of runtime fetching of external code or data that could be manipulated.",{"category":66,"check":76,"severity":24,"summary":77},"Sandbox Isolation","The skill's operations are confined to library calls and configuration, with no actions that would modify files outside the project or system settings.",{"category":66,"check":79,"severity":24,"summary":80},"Sandbox escape primitives","No detached process spawns or deny-retry loops were observed in the provided code or documentation.",{"category":66,"check":82,"severity":24,"summary":83},"Data Exfiltration","The skill's purpose is inference optimization, and there is no evidence of any outbound calls designed to exfiltrate user data or confidential information.",{"category":66,"check":85,"severity":24,"summary":86},"Hidden Text Tricks","The bundled content and documentation appear free of hidden text tricks, invisible Unicode characters, or other obfuscation techniques.",{"category":88,"check":89,"severity":24,"summary":90},"Hooks","Opaque code execution","The skill does not employ obfuscated code, base64 payloads, or runtime script fetching; all code appears to be plain and readable.",{"category":92,"check":93,"severity":24,"summary":94},"Portability","Structural Assumption","The skill's installation and usage instructions focus on library dependencies and server configurations, not on specific user project file layouts.",{"category":96,"check":97,"severity":24,"summary":98},"Trust","Issues Attention","With 4 issues opened and 8 closed in the last 90 days, the closure rate is high (66.7%), indicating good maintainer engagement.",{"category":100,"check":101,"severity":24,"summary":102},"Versioning","Release Management","A clear version (1.0.0) is declared in the SKILL.md frontmatter, and a CHANGELOG is present, indicating robust release management.",{"category":104,"check":105,"severity":24,"summary":106},"Execution","Validation","While direct code validation via schema libraries isn't explicitly detailed for the entire skill, the provided examples and serving configurations appear to be well-formed and adhere to expected formats.",{"category":66,"check":108,"severity":24,"summary":109},"Unguarded Destructive Operations","The skill is focused on optimization and inference, and does not involve destructive operations like file deletion or infrastructure changes.",{"category":111,"check":112,"severity":24,"summary":113},"Code Execution","Error Handling","The provided documentation and examples for trtllm-serve and the Python API suggest structured error reporting and graceful handling of common issues like OOMs and timeouts.",{"category":111,"check":115,"severity":24,"summary":116},"Logging","The serving guide details Prometheus metrics and health checks, indicating a good level of observability and auditability for production deployments.",{"category":118,"check":119,"severity":24,"summary":120},"Compliance","GDPR","The skill optimizes LLM inference and does not appear to operate on or process personal data.",{"category":118,"check":122,"severity":24,"summary":123},"Target market","The skill's functionality is hardware-specific (NVIDIA GPUs) but its operational principles are globally applicable; targetMarket is set to 'global'.",{"category":92,"check":125,"severity":24,"summary":126},"Runtime stability","The skill relies on standard NVIDIA tooling and Python, with Docker and Kubernetes deployment guidance, indicating good cross-platform compatibility.",{"category":45,"check":128,"severity":24,"summary":129},"README","A comprehensive README exists and clearly outlines the project's mission, available skills, installation methods, and use cases.",{"category":33,"check":131,"severity":42,"summary":132},"Tool surface size","This skill is primarily library-based and does not expose a distinct set of tools in the manifest for counting.",{"category":40,"check":134,"severity":42,"summary":135},"Overlapping near-synonym tools","As this skill does not expose multiple tools, there are no overlapping near-synonym tools to evaluate.",{"category":45,"check":137,"severity":24,"summary":138},"Phantom features","All advertised features, such as FP8 quantization, in-flight batching, and multi-GPU scaling, are directly supported by TensorRT-LLM and are well-documented.",{"category":140,"check":141,"severity":24,"summary":142},"Install","Installation instruction","The README provides clear installation instructions for both Docker and pip, and the SKILL.md includes example invocations for basic inference and serving.",{"category":144,"check":145,"severity":24,"summary":146},"Errors","Actionable error messages","The troubleshooting section in the documentation addresses common errors like OOMs and high latency, providing diagnostic steps and remediation advice.",{"category":104,"check":148,"severity":24,"summary":149},"Pinned dependencies","The `pip install` command in the SKILL.md specifies a version (`tensorrt_llm==1.2.0rc3`), and the Dockerfile suggests a specific image tag, indicating pinned dependencies.",{"category":33,"check":151,"severity":42,"summary":152},"Dry-run preview","The skill focuses on inference and optimization, not state-changing operations that would typically require a dry-run feature.",{"category":154,"check":155,"severity":24,"summary":156},"Protocol","Idempotent retry & timeouts","The serving guide mentions long timeouts for proxying and error handling for timeouts, suggesting consideration for retries and timeouts, though specific idempotency for mutations isn't directly applicable to inference.",{"category":118,"check":158,"severity":24,"summary":159},"Telemetry opt-in","There is no mention of telemetry collection in the documentation; operations are local or related to performance metrics exposed via Prometheus, which is standard for serving.",{"category":40,"check":161,"severity":24,"summary":162},"Precise Purpose","The purpose is precisely defined as optimizing LLM inference with NVIDIA TensorRT-LLM for specific hardware and performance goals, with clear comparisons to alternatives like vLLM and llama.cpp.",{"category":40,"check":164,"severity":24,"summary":165},"Concise Frontmatter","The SKILL.md frontmatter is concise and self-contained, providing essential metadata like name, description, version, and tags.",{"category":45,"check":167,"severity":24,"summary":168},"Concise Body","The SKILL.md is reasonably concise, with detailed technical information and references delegated to separate files in the `references/` directory.",{"category":170,"check":171,"severity":24,"summary":172},"Context","Progressive Disclosure","The SKILL.md outlines key features and provides quick start guides, linking to detailed technical guides in the `references/` directory for advanced topics like multi-GPU deployment and optimization.",{"category":170,"check":174,"severity":42,"summary":175},"Forked exploration","This skill is focused on deployment and optimization of inference, not deep exploration or code review that would necessitate a forked context.",{"category":22,"check":177,"severity":24,"summary":178},"Usage examples","The SKILL.md and references include numerous ready-to-use code snippets and command-line examples for installation, basic inference, serving, and deployment.",{"category":22,"check":180,"severity":24,"summary":181},"Edge cases","The documentation addresses common failure modes like OOM errors and high latency, providing specific recovery steps and tuning advice.",{"category":111,"check":183,"severity":42,"summary":184},"Tool Fallback","This skill does not appear to rely on external MCP servers or tools that would require a fallback path.",{"category":92,"check":186,"severity":24,"summary":187},"Stack assumptions","The SKILL.md and Dockerfile clearly state stack assumptions like CUDA, Python versions, and required libraries, and provide installation instructions accordingly.",{"category":189,"check":190,"severity":24,"summary":191},"Safety","Halt on unexpected state","The documentation focuses on correct configuration and operational procedures, implying that deviations or errors would halt the process and require manual intervention or debugging.",{"category":92,"check":193,"severity":42,"summary":194},"Cross-skill coupling","This skill operates independently and does not implicitly or explicitly rely on other skills within the AI-Research-SKILLs library.",1778696685724,"This skill provides guidance and examples for leveraging NVIDIA TensorRT-LLM to optimize Large Language Model inference for maximum throughput and minimal latency on NVIDIA GPUs. It covers installation, model compilation, serving configurations, quantization (FP8, INT4), multi-GPU parallelism, and production deployment strategies.",[198,199,200,201,202],"Optimizes LLM inference with NVIDIA TensorRT-LLM","Achieves high throughput and low latency on NVIDIA GPUs","Supports production deployment scenarios","Demonstrates use of quantization (FP8, INT4)","Covers in-flight batching and multi-GPU scaling",[204,205,206,207],"Optimizing LLM inference on non-NVIDIA hardware","Providing a user-friendly Python-first API like vLLM","Edge deployment without NVIDIA GPUs","Using non-TensorRT quantization formats like GGUF","3.0.0","4.4.0","To enable users to achieve state-of-the-art performance for LLM inference in production environments by utilizing NVIDIA TensorRT-LLM's advanced optimization and serving capabilities.","The skill exhibits excellent documentation, clear purpose, and robust production readiness, with no critical or warning findings. Minor areas for potential improvement exist in explicit schema validation for tool parameters, but these do not detract from overall quality.",98,"Highly polished and production-ready skill for optimizing LLM inference using NVIDIA TensorRT-LLM.",[215,13,216,217,218,219,220,221,222,223],"inference-serving","nvidia","inference-optimization","high-throughput","low-latency","production","fp8","int4","multi-gpu","global","verified",[227,228,229,230],"Deploying LLMs in production on NVIDIA A100/H100 GPUs","Serving models requiring maximum throughput (e.g., 24,000+ tokens/sec)","Reducing inference latency for real-time applications","Utilizing quantized models (FP8/INT4) for memory and speed gains",{"codeQuality":232,"collectedAt":234,"documentation":235,"maintenance":238,"popularity":245,"security":246,"testCoverage":249},{"hasLockfile":233},true,1778696661029,{"descriptionLength":236,"readmeSize":237},293,45313,{"closedIssues90d":239,"forks":240,"hasChangelog":233,"manifestVersion":241,"openIssues90d":242,"pushedAt":243,"stars":244},8,640,"1.0.0",4,1777352967000,8343,{"npmDownloads":8},{"hasNpmPackage":233,"license":247,"smitheryVerified":248},"MIT",false,{"hasCi":233,"hasTests":248},{"updatedAt":251},1778696687593,{"basePath":253,"githubOwner":254,"githubRepo":255,"locale":18,"slug":13,"type":256},"12-inference-serving/tensorrt-llm","Orchestra-Research","AI-Research-SKILLs","skill",{"_creationTime":258,"_id":259,"community":260,"display":261,"identity":265,"parentExtension":268,"providers":301,"relations":311,"tags":312,"workflow":313},1778695116697.1702,"k17155ws9qc0hw7a568bg79sfd86max8",{"reviewCount":8},{"description":262,"installMethods":263,"name":264,"sourceUrl":14},"LLM architectures and implementations including LitGPT, Mamba, NanoGPT, RWKV, and TorchTitan. Use when implementing, training, or understanding transformer and alternative architectures.",{"claudeCode":255},"Agent-Native Research Artifact (ARA) Tooling",{"basePath":266,"githubOwner":254,"githubRepo":255,"locale":18,"slug":255,"type":267},"","plugin",{"_creationTime":269,"_id":270,"community":271,"display":272,"identity":276,"providers":278,"relations":295,"tags":297,"workflow":298},1778695116697.17,"k17755pkhk2ktxts0edcsj00s586nmvk",{"reviewCount":8},{"description":273,"installMethods":274,"name":275,"sourceUrl":14},"Comprehensive library of 98 AI research engineering skills enabling autonomous AI research from hypothesis to experimental verification",{"claudeCode":12},"AI Research Skills Library",{"basePath":266,"githubOwner":254,"githubRepo":255,"locale":18,"slug":255,"type":277},"marketplace",{"evaluate":279,"extract":288},{"promptVersionExtension":280,"promptVersionScoring":209,"score":281,"tags":282,"targetMarket":224,"tier":225},"3.1.0",99,[283,284,285,286,287],"ai-research","mlops","llm-skills","autonomous-agents","research-orchestration",{"commitSha":289,"license":247,"marketplace":290,"plugin":293},"HEAD",{"name":291,"pluginCount":292},"ai-research-skills",1,{"mcpCount":8,"provider":294,"skillCount":8},"classify",{"repoId":296},"kd70hj1y80mhra5xm5g188j5n586mg18",[283,286,285,284,287],{"evaluatedAt":299,"extractAt":300,"updatedAt":299},1778695131103,1778695116697,{"evaluate":302,"extract":308},{"promptVersionExtension":208,"promptVersionScoring":209,"score":212,"tags":303,"targetMarket":224,"tier":225},[304,305,306,307,283],"research","artifact","provenance","review",{"commitSha":289,"license":247,"plugin":309},{"mcpCount":8,"provider":294,"skillCount":310},96,{"parentExtensionId":270,"repoId":296},[283,305,306,304,307],{"evaluatedAt":314,"extractAt":300,"updatedAt":314},1778695555085,{"evaluate":316,"extract":318},{"promptVersionExtension":208,"promptVersionScoring":209,"score":212,"tags":317,"targetMarket":224,"tier":225},[215,13,216,217,218,219,220,221,222,223],{"commitSha":289},{"parentExtensionId":259,"repoId":296},{"_creationTime":321,"_id":296,"identity":322,"providers":323,"workflow":1856},1778695107142.3535,{"githubOwner":254,"githubRepo":255,"sourceUrl":14},{"classify":324,"discover":1834,"extract":1837,"github":1838,"npm":1855},{"commitSha":289,"extensions":325},[326,339,346,353,360,367,374,381,388,395,402,409,416,422,428,435,442,449,456,463,470,477,484,491,498,522,538,552,566,579,594,609,619,635,651,663,678,691,702,713,724,736,747,760,771,787,801,811,821,837,847,855,863,871,879,893,907,939,953,963,973,983,993,1003,1017,1028,1038,1050,1060,1079,1092,1107,1123,1137,1151,1161,1174,1187,1199,1211,1224,1243,1253,1266,1279,1292,1301,1311,1320,1330,1340,1352,1365,1378,1390,1400,1410,1420,1430,1440,1452,1461,1479,1495,1505,1515,1525,1535,1549,1562,1572,1585,1597,1611,1718,1728,1765,1773,1781,1795,1809,1819],{"basePath":266,"description":273,"displayName":291,"installMethods":327,"rationale":328,"selectedPaths":329,"source":338,"sourceLanguage":18,"type":277},{"claudeCode":12},"marketplace.json at .claude-plugin/marketplace.json",[330,333,335],{"path":331,"priority":332},".claude-plugin/marketplace.json","mandatory",{"path":334,"priority":332},"README.md",{"path":336,"priority":337},"LICENSE","high","rule",{"basePath":266,"description":262,"displayName":340,"installMethods":341,"rationale":342,"selectedPaths":343,"source":338,"sourceLanguage":18,"type":267},"model-architecture",{"claudeCode":255},"inline plugin source from marketplace.json at /",[344,345],{"path":334,"priority":332},{"path":336,"priority":337},{"basePath":266,"description":347,"displayName":348,"installMethods":349,"rationale":342,"selectedPaths":350,"source":338,"sourceLanguage":18,"type":267},"Text tokenization for LLMs including HuggingFace Tokenizers and SentencePiece. Use when training custom tokenizers or handling multilingual text.","tokenization",{"claudeCode":255},[351,352],{"path":334,"priority":332},{"path":336,"priority":337},{"basePath":266,"description":354,"displayName":355,"installMethods":356,"rationale":342,"selectedPaths":357,"source":338,"sourceLanguage":18,"type":267},"LLM fine-tuning frameworks including Axolotl, LLaMA-Factory, PEFT, and Unsloth. Use when fine-tuning models with LoRA, QLoRA, or full fine-tuning.","fine-tuning",{"claudeCode":255},[358,359],{"path":334,"priority":332},{"path":336,"priority":337},{"basePath":266,"description":361,"displayName":362,"installMethods":363,"rationale":342,"selectedPaths":364,"source":338,"sourceLanguage":18,"type":267},"Neural network interpretability tools including TransformerLens, SAELens, NNSight, and pyvene. Use when analyzing model internals, finding circuits, or understanding how models compute.","mechanistic-interpretability",{"claudeCode":255},[365,366],{"path":334,"priority":332},{"path":336,"priority":337},{"basePath":266,"description":368,"displayName":369,"installMethods":370,"rationale":342,"selectedPaths":371,"source":338,"sourceLanguage":18,"type":267},"Data curation and processing at scale including NeMo Curator and Ray Data. Use when preparing training datasets or processing large-scale data.","data-processing",{"claudeCode":255},[372,373],{"path":334,"priority":332},{"path":336,"priority":337},{"basePath":266,"description":375,"displayName":376,"installMethods":377,"rationale":342,"selectedPaths":378,"source":338,"sourceLanguage":18,"type":267},"RLHF and preference alignment including TRL, GRPO, OpenRLHF, SimPO, verl, slime, miles, and torchforge. Use when aligning models with human preferences, training reward models, or large-scale RL training.","post-training",{"claudeCode":255},[379,380],{"path":334,"priority":332},{"path":336,"priority":337},{"basePath":266,"description":382,"displayName":383,"installMethods":384,"rationale":342,"selectedPaths":385,"source":338,"sourceLanguage":18,"type":267},"AI safety and content moderation including Constitutional AI, LlamaGuard, NeMo Guardrails, and Prompt Guard. Use when implementing safety filters, content moderation, or prompt injection detection.","safety-alignment",{"claudeCode":255},[386,387],{"path":334,"priority":332},{"path":336,"priority":337},{"basePath":266,"description":389,"displayName":390,"installMethods":391,"rationale":342,"selectedPaths":392,"source":338,"sourceLanguage":18,"type":267},"Multi-GPU and multi-node training including DeepSpeed, PyTorch FSDP, Accelerate, Megatron-Core, PyTorch Lightning, and Ray Train. Use when training large models across GPUs.","distributed-training",{"claudeCode":255},[393,394],{"path":334,"priority":332},{"path":336,"priority":337},{"basePath":266,"description":396,"displayName":397,"installMethods":398,"rationale":342,"selectedPaths":399,"source":338,"sourceLanguage":18,"type":267},"GPU cloud and compute orchestration including Modal, Lambda Labs, and SkyPilot. Use when deploying training jobs or managing GPU resources.","infrastructure",{"claudeCode":255},[400,401],{"path":334,"priority":332},{"path":336,"priority":337},{"basePath":266,"description":403,"displayName":404,"installMethods":405,"rationale":342,"selectedPaths":406,"source":338,"sourceLanguage":18,"type":267},"Model optimization and quantization including Flash Attention, bitsandbytes, GPTQ, AWQ, GGUF, and HQQ. Use when reducing memory, accelerating inference, or quantizing models.","optimization",{"claudeCode":255},[407,408],{"path":334,"priority":332},{"path":336,"priority":337},{"basePath":266,"description":410,"displayName":411,"installMethods":412,"rationale":342,"selectedPaths":413,"source":338,"sourceLanguage":18,"type":267},"LLM benchmarking and evaluation including lm-evaluation-harness, BigCode Evaluation Harness, and NeMo Evaluator. Use when benchmarking models or measuring performance.","evaluation",{"claudeCode":255},[414,415],{"path":334,"priority":332},{"path":336,"priority":337},{"basePath":266,"description":417,"displayName":215,"installMethods":418,"rationale":342,"selectedPaths":419,"source":338,"sourceLanguage":18,"type":267},"Production LLM inference including vLLM, TensorRT-LLM, llama.cpp, and SGLang. Use when deploying models for production inference.",{"claudeCode":255},[420,421],{"path":334,"priority":332},{"path":336,"priority":337},{"basePath":266,"description":423,"displayName":284,"installMethods":424,"rationale":342,"selectedPaths":425,"source":338,"sourceLanguage":18,"type":267},"ML experiment tracking and lifecycle including Weights & Biases, MLflow, and TensorBoard. Use when tracking experiments or managing models.",{"claudeCode":255},[426,427],{"path":334,"priority":332},{"path":336,"priority":337},{"basePath":266,"description":429,"displayName":430,"installMethods":431,"rationale":342,"selectedPaths":432,"source":338,"sourceLanguage":18,"type":267},"LLM agent frameworks including LangChain, LlamaIndex, CrewAI, and AutoGPT. Use when building chatbots, autonomous agents, or tool-using systems.","agents",{"claudeCode":255},[433,434],{"path":334,"priority":332},{"path":336,"priority":337},{"basePath":266,"description":436,"displayName":437,"installMethods":438,"rationale":342,"selectedPaths":439,"source":338,"sourceLanguage":18,"type":267},"Retrieval-Augmented Generation including Chroma, FAISS, Pinecone, Qdrant, and Sentence Transformers. Use when building semantic search or document retrieval systems.","rag",{"claudeCode":255},[440,441],{"path":334,"priority":332},{"path":336,"priority":337},{"basePath":266,"description":443,"displayName":444,"installMethods":445,"rationale":342,"selectedPaths":446,"source":338,"sourceLanguage":18,"type":267},"Structured LLM outputs including DSPy, Instructor, Guidance, and Outlines. Use when extracting structured data or constraining LLM outputs.","prompt-engineering",{"claudeCode":255},[447,448],{"path":334,"priority":332},{"path":336,"priority":337},{"basePath":266,"description":450,"displayName":451,"installMethods":452,"rationale":342,"selectedPaths":453,"source":338,"sourceLanguage":18,"type":267},"LLM application monitoring including LangSmith and Phoenix. Use when debugging LLM apps or monitoring production systems.","observability",{"claudeCode":255},[454,455],{"path":334,"priority":332},{"path":336,"priority":337},{"basePath":266,"description":457,"displayName":458,"installMethods":459,"rationale":342,"selectedPaths":460,"source":338,"sourceLanguage":18,"type":267},"Vision, audio, and multimodal models including CLIP, Whisper, LLaVA, BLIP-2, Segment Anything, Stable Diffusion, AudioCraft, Cosmos Policy, OpenPI, and OpenVLA-OFT. Use when working with images, audio, multimodal tasks, or vision-language-action robot policies.","multimodal",{"claudeCode":255},[461,462],{"path":334,"priority":332},{"path":336,"priority":337},{"basePath":266,"description":464,"displayName":465,"installMethods":466,"rationale":342,"selectedPaths":467,"source":338,"sourceLanguage":18,"type":267},"Advanced ML techniques including MoE Training, Model Merging, Long Context, Speculative Decoding, Knowledge Distillation, and Model Pruning. Use when implementing cutting-edge optimization or architecture techniques.","emerging-techniques",{"claudeCode":255},[468,469],{"path":334,"priority":332},{"path":336,"priority":337},{"basePath":266,"description":471,"displayName":472,"installMethods":473,"rationale":342,"selectedPaths":474,"source":338,"sourceLanguage":18,"type":267},"Autonomous research orchestration using a two-loop architecture. Manages the full research lifecycle from literature survey to paper writing, routing to domain-specific skills for execution. Use when starting a research project, running autonomous experiments, or managing multi-hypothesis research.","autoresearch",{"claudeCode":255},[475,476],{"path":334,"priority":332},{"path":336,"priority":337},{"basePath":266,"description":478,"displayName":479,"installMethods":480,"rationale":342,"selectedPaths":481,"source":338,"sourceLanguage":18,"type":267},"Write publication-ready ML/AI/Systems papers for NeurIPS, ICML, ICLR, ACL, AAAI, COLM, OSDI, NSDI, ASPLOS, SOSP. Includes LaTeX templates, citation verification, reviewer guidelines, publication-quality figure generation, systems paper structural blueprints, and conference presentation slides.","ml-paper-writing",{"claudeCode":255},[482,483],{"path":334,"priority":332},{"path":336,"priority":337},{"basePath":266,"description":485,"displayName":486,"installMethods":487,"rationale":342,"selectedPaths":488,"source":338,"sourceLanguage":18,"type":267},"Research ideation frameworks including structured brainstorming and creative thinking. Use when exploring new research directions, generating novel ideas, or seeking fresh angles on existing work.","ideation",{"claudeCode":255},[489,490],{"path":334,"priority":332},{"path":336,"priority":337},{"basePath":266,"description":492,"displayName":493,"installMethods":494,"rationale":342,"selectedPaths":495,"source":338,"sourceLanguage":18,"type":267},"Agent-Native Research Artifact (ARA) tooling: compile any research input (paper, repo, notes) into a structured artifact, record session provenance as a post-task epilogue, and run Seal Level 2 epistemic review. Use when ingesting research into a falsifiable, agent-traversable artifact, capturing how a research project actually evolved, or auditing an ARA for evidence-claim alignment.","agent-native-research-artifact",{"claudeCode":255},[496,497],{"path":334,"priority":332},{"path":336,"priority":337},{"basePath":499,"description":500,"displayName":472,"installMethods":501,"rationale":502,"selectedPaths":503,"source":338,"sourceLanguage":18,"type":256},"0-autoresearch-skill","Orchestrates end-to-end autonomous AI research projects using a two-loop architecture. The inner loop runs rapid experiment iterations with clear optimization targets. The outer loop synthesizes results, identifies patterns, and steers research direction. Routes to domain-specific skills for execution, supports continuous agent operation via Claude Code /loop and OpenClaw heartbeat, and produces research presentations and papers. Use when starting a research project, running autonomous experiments, or managing a multi-hypothesis research effort.",{"claudeCode":12},"SKILL.md frontmatter at 0-autoresearch-skill/SKILL.md",[504,506,509,511,513,516,518,520],{"path":505,"priority":332},"SKILL.md",{"path":507,"priority":508},"references/agent-continuity.md","medium",{"path":510,"priority":508},"references/progress-reporting.md",{"path":512,"priority":508},"references/skill-routing.md",{"path":514,"priority":515},"templates/findings.md","low",{"path":517,"priority":515},"templates/progress-presentation.html",{"path":519,"priority":515},"templates/research-log.md",{"path":521,"priority":515},"templates/research-state.yaml",{"basePath":523,"description":524,"displayName":525,"installMethods":526,"rationale":527,"selectedPaths":528,"source":338,"sourceLanguage":18,"type":256},"01-model-architecture/litgpt","Implements and trains LLMs using Lightning AI's LitGPT with 20+ pretrained architectures (Llama, Gemma, Phi, Qwen, Mistral). Use when need clean model implementations, educational understanding of architectures, or production fine-tuning with LoRA/QLoRA. Single-file implementations, no abstraction layers.","implementing-llms-litgpt",{"claudeCode":12},"SKILL.md frontmatter at 01-model-architecture/litgpt/SKILL.md",[529,530,532,534,536],{"path":505,"priority":332},{"path":531,"priority":508},"references/custom-models.md",{"path":533,"priority":508},"references/distributed-training.md",{"path":535,"priority":508},"references/supported-models.md",{"path":537,"priority":508},"references/training-recipes.md",{"basePath":539,"description":540,"displayName":541,"installMethods":542,"rationale":543,"selectedPaths":544,"source":338,"sourceLanguage":18,"type":256},"01-model-architecture/mamba","State-space model with O(n) complexity vs Transformers' O(n²). 5× faster inference, million-token sequences, no KV cache. Selective SSM with hardware-aware design. Mamba-1 (d_state=16) and Mamba-2 (d_state=128, multi-head). Models 130M-2.8B on HuggingFace.","mamba-architecture",{"claudeCode":12},"SKILL.md frontmatter at 01-model-architecture/mamba/SKILL.md",[545,546,548,550],{"path":505,"priority":332},{"path":547,"priority":508},"references/architecture-details.md",{"path":549,"priority":508},"references/benchmarks.md",{"path":551,"priority":508},"references/training-guide.md",{"basePath":553,"description":554,"displayName":555,"installMethods":556,"rationale":557,"selectedPaths":558,"source":338,"sourceLanguage":18,"type":256},"01-model-architecture/nanogpt","Educational GPT implementation in ~300 lines. Reproduces GPT-2 (124M) on OpenWebText. Clean, hackable code for learning transformers. By Andrej Karpathy. Perfect for understanding GPT architecture from scratch. Train on Shakespeare (CPU) or OpenWebText (multi-GPU).","nanogpt",{"claudeCode":12},"SKILL.md frontmatter at 01-model-architecture/nanogpt/SKILL.md",[559,560,562,564],{"path":505,"priority":332},{"path":561,"priority":508},"references/architecture.md",{"path":563,"priority":508},"references/data.md",{"path":565,"priority":508},"references/training.md",{"basePath":567,"description":568,"displayName":569,"installMethods":570,"rationale":571,"selectedPaths":572,"source":338,"sourceLanguage":18,"type":256},"01-model-architecture/rwkv","RNN+Transformer hybrid with O(n) inference. Linear time, infinite context, no KV cache. Train like GPT (parallel), infer like RNN (sequential). Linux Foundation AI project. Production at Windows, Office, NeMo. RWKV-7 (March 2025). Models up to 14B parameters.","rwkv-architecture",{"claudeCode":12},"SKILL.md frontmatter at 01-model-architecture/rwkv/SKILL.md",[573,574,575,577],{"path":505,"priority":332},{"path":547,"priority":508},{"path":576,"priority":508},"references/rwkv7.md",{"path":578,"priority":508},"references/state-management.md",{"basePath":580,"description":581,"displayName":582,"installMethods":583,"rationale":584,"selectedPaths":585,"source":338,"sourceLanguage":18,"type":256},"01-model-architecture/torchtitan","Provides PyTorch-native distributed LLM pretraining using torchtitan with 4D parallelism (FSDP2, TP, PP, CP). Use when pretraining Llama 3.1, DeepSeek V3, or custom models at scale from 8 to 512+ GPUs with Float8, torch.compile, and distributed checkpointing.","distributed-llm-pretraining-torchtitan",{"claudeCode":12},"SKILL.md frontmatter at 01-model-architecture/torchtitan/SKILL.md",[586,587,589,590,592],{"path":505,"priority":332},{"path":588,"priority":508},"references/checkpoint.md",{"path":531,"priority":508},{"path":591,"priority":508},"references/float8.md",{"path":593,"priority":508},"references/fsdp.md",{"basePath":595,"description":596,"displayName":597,"installMethods":598,"rationale":599,"selectedPaths":600,"source":338,"sourceLanguage":18,"type":256},"02-tokenization/huggingface-tokenizers","Fast tokenizers optimized for research and production. Rust-based implementation tokenizes 1GB in \u003C20 seconds. Supports BPE, WordPiece, and Unigram algorithms. Train custom vocabularies, track alignments, handle padding/truncation. Integrates seamlessly with transformers. Use when you need high-performance tokenization or custom tokenizer training.","huggingface-tokenizers",{"claudeCode":12},"SKILL.md frontmatter at 02-tokenization/huggingface-tokenizers/SKILL.md",[601,602,604,606,608],{"path":505,"priority":332},{"path":603,"priority":508},"references/algorithms.md",{"path":605,"priority":508},"references/integration.md",{"path":607,"priority":508},"references/pipeline.md",{"path":565,"priority":508},{"basePath":610,"description":611,"displayName":612,"installMethods":613,"rationale":614,"selectedPaths":615,"source":338,"sourceLanguage":18,"type":256},"02-tokenization/sentencepiece","Language-independent tokenizer treating text as raw Unicode. Supports BPE and Unigram algorithms. Fast (50k sentences/sec), lightweight (6MB memory), deterministic vocabulary. Used by T5, ALBERT, XLNet, mBART. Train on raw text without pre-tokenization. Use when you need multilingual support, CJK languages, or reproducible tokenization.","sentencepiece",{"claudeCode":12},"SKILL.md frontmatter at 02-tokenization/sentencepiece/SKILL.md",[616,617,618],{"path":505,"priority":332},{"path":603,"priority":508},{"path":565,"priority":508},{"basePath":620,"description":621,"displayName":622,"installMethods":623,"rationale":624,"selectedPaths":625,"source":338,"sourceLanguage":18,"type":256},"03-fine-tuning/axolotl","Expert guidance for fine-tuning LLMs with Axolotl - YAML configs, 100+ models, LoRA/QLoRA, DPO/KTO/ORPO/GRPO, multimodal support","axolotl",{"claudeCode":12},"SKILL.md frontmatter at 03-fine-tuning/axolotl/SKILL.md",[626,627,629,631,633],{"path":505,"priority":332},{"path":628,"priority":508},"references/api.md",{"path":630,"priority":508},"references/dataset-formats.md",{"path":632,"priority":508},"references/index.md",{"path":634,"priority":508},"references/other.md",{"basePath":636,"description":637,"displayName":638,"installMethods":639,"rationale":640,"selectedPaths":641,"source":338,"sourceLanguage":18,"type":256},"03-fine-tuning/llama-factory","Expert guidance for fine-tuning LLMs with LLaMA-Factory - WebUI no-code, 100+ models, 2/3/4/5/6/8-bit QLoRA, multimodal support","llama-factory",{"claudeCode":12},"SKILL.md frontmatter at 03-fine-tuning/llama-factory/SKILL.md",[642,643,645,647,649,650],{"path":505,"priority":332},{"path":644,"priority":508},"references/_images.md",{"path":646,"priority":508},"references/advanced.md",{"path":648,"priority":508},"references/getting_started.md",{"path":632,"priority":508},{"path":634,"priority":508},{"basePath":652,"description":653,"displayName":654,"installMethods":655,"rationale":656,"selectedPaths":657,"source":338,"sourceLanguage":18,"type":256},"03-fine-tuning/peft","Parameter-efficient fine-tuning for LLMs using LoRA, QLoRA, and 25+ methods. Use when fine-tuning large models (7B-70B) with limited GPU memory, when you need to train \u003C1% of parameters with minimal accuracy loss, or for multi-adapter serving. HuggingFace's official library integrated with transformers ecosystem.","peft-fine-tuning",{"claudeCode":12},"SKILL.md frontmatter at 03-fine-tuning/peft/SKILL.md",[658,659,661],{"path":505,"priority":332},{"path":660,"priority":508},"references/advanced-usage.md",{"path":662,"priority":508},"references/troubleshooting.md",{"basePath":664,"description":665,"displayName":666,"installMethods":667,"rationale":668,"selectedPaths":669,"source":338,"sourceLanguage":18,"type":256},"03-fine-tuning/unsloth","Expert guidance for fast fine-tuning with Unsloth - 2-5x faster training, 50-80% less memory, LoRA/QLoRA optimization","unsloth",{"claudeCode":12},"SKILL.md frontmatter at 03-fine-tuning/unsloth/SKILL.md",[670,671,672,674,676],{"path":505,"priority":332},{"path":632,"priority":508},{"path":673,"priority":508},"references/llms-full.md",{"path":675,"priority":508},"references/llms-txt.md",{"path":677,"priority":508},"references/llms.md",{"basePath":679,"description":680,"displayName":681,"installMethods":682,"rationale":683,"selectedPaths":684,"source":338,"sourceLanguage":18,"type":256},"04-mechanistic-interpretability/nnsight","Provides guidance for interpreting and manipulating neural network internals using nnsight with optional NDIF remote execution. Use when needing to run interpretability experiments on massive models (70B+) without local GPU resources, or when working with any PyTorch architecture.","nnsight-remote-interpretability",{"claudeCode":12},"SKILL.md frontmatter at 04-mechanistic-interpretability/nnsight/SKILL.md",[685,686,688,689],{"path":505,"priority":332},{"path":687,"priority":508},"references/README.md",{"path":628,"priority":508},{"path":690,"priority":508},"references/tutorials.md",{"basePath":692,"description":693,"displayName":694,"installMethods":695,"rationale":696,"selectedPaths":697,"source":338,"sourceLanguage":18,"type":256},"04-mechanistic-interpretability/pyvene","Provides guidance for performing causal interventions on PyTorch models using pyvene's declarative intervention framework. Use when conducting causal tracing, activation patching, interchange intervention training, or testing causal hypotheses about model behavior.","pyvene-interventions",{"claudeCode":12},"SKILL.md frontmatter at 04-mechanistic-interpretability/pyvene/SKILL.md",[698,699,700,701],{"path":505,"priority":332},{"path":687,"priority":508},{"path":628,"priority":508},{"path":690,"priority":508},{"basePath":703,"description":704,"displayName":705,"installMethods":706,"rationale":707,"selectedPaths":708,"source":338,"sourceLanguage":18,"type":256},"04-mechanistic-interpretability/saelens","Provides guidance for training and analyzing Sparse Autoencoders (SAEs) using SAELens to decompose neural network activations into interpretable features. Use when discovering interpretable features, analyzing superposition, or studying monosemantic representations in language models.","sparse-autoencoder-training",{"claudeCode":12},"SKILL.md frontmatter at 04-mechanistic-interpretability/saelens/SKILL.md",[709,710,711,712],{"path":505,"priority":332},{"path":687,"priority":508},{"path":628,"priority":508},{"path":690,"priority":508},{"basePath":714,"description":715,"displayName":716,"installMethods":717,"rationale":718,"selectedPaths":719,"source":338,"sourceLanguage":18,"type":256},"04-mechanistic-interpretability/transformer-lens","Provides guidance for mechanistic interpretability research using TransformerLens to inspect and manipulate transformer internals via HookPoints and activation caching. Use when reverse-engineering model algorithms, studying attention patterns, or performing activation patching experiments.","transformer-lens-interpretability",{"claudeCode":12},"SKILL.md frontmatter at 04-mechanistic-interpretability/transformer-lens/SKILL.md",[720,721,722,723],{"path":505,"priority":332},{"path":687,"priority":508},{"path":628,"priority":508},{"path":690,"priority":508},{"basePath":725,"description":726,"displayName":727,"installMethods":728,"rationale":729,"selectedPaths":730,"source":338,"sourceLanguage":18,"type":256},"05-data-processing/nemo-curator","GPU-accelerated data curation for LLM training. Supports text/image/video/audio. Features fuzzy deduplication (16× faster), quality filtering (30+ heuristics), semantic deduplication, PII redaction, NSFW detection. Scales across GPUs with RAPIDS. Use for preparing high-quality training datasets, cleaning web data, or deduplicating large corpora.","nemo-curator",{"claudeCode":12},"SKILL.md frontmatter at 05-data-processing/nemo-curator/SKILL.md",[731,732,734],{"path":505,"priority":332},{"path":733,"priority":508},"references/deduplication.md",{"path":735,"priority":508},"references/filtering.md",{"basePath":737,"description":738,"displayName":739,"installMethods":740,"rationale":741,"selectedPaths":742,"source":338,"sourceLanguage":18,"type":256},"05-data-processing/ray-data","Scalable data processing for ML workloads. Streaming execution across CPU/GPU, supports Parquet/CSV/JSON/images. Integrates with Ray Train, PyTorch, TensorFlow. Scales from single machine to 100s of nodes. Use for batch inference, data preprocessing, multi-modal data loading, or distributed ETL pipelines.","ray-data",{"claudeCode":12},"SKILL.md frontmatter at 05-data-processing/ray-data/SKILL.md",[743,744,745],{"path":505,"priority":332},{"path":605,"priority":508},{"path":746,"priority":508},"references/transformations.md",{"basePath":748,"description":749,"displayName":750,"installMethods":751,"rationale":752,"selectedPaths":753,"source":338,"sourceLanguage":18,"type":256},"06-post-training/grpo-rl-training","Expert guidance for GRPO/RL fine-tuning with TRL for reasoning and task-specific model training","grpo-rl-training",{"claudeCode":12},"SKILL.md frontmatter at 06-post-training/grpo-rl-training/SKILL.md",[754,755,756,758],{"path":505,"priority":332},{"path":334,"priority":337},{"path":757,"priority":515},"examples/reward_functions_library.py",{"path":759,"priority":515},"templates/basic_grpo_training.py",{"basePath":761,"description":762,"displayName":763,"installMethods":764,"rationale":765,"selectedPaths":766,"source":338,"sourceLanguage":18,"type":256},"06-post-training/miles","Provides guidance for enterprise-grade RL training using miles, a production-ready fork of slime. Use when training large MoE models with FP8/INT4, needing train-inference alignment, or requiring speculative RL for maximum throughput.","miles-rl-training",{"claudeCode":12},"SKILL.md frontmatter at 06-post-training/miles/SKILL.md",[767,768,770],{"path":505,"priority":332},{"path":769,"priority":508},"references/api-reference.md",{"path":662,"priority":508},{"basePath":772,"description":773,"displayName":774,"installMethods":775,"rationale":776,"selectedPaths":777,"source":338,"sourceLanguage":18,"type":256},"06-post-training/openrlhf","High-performance RLHF framework with Ray+vLLM acceleration. Use for PPO, GRPO, RLOO, DPO training of large models (7B-70B+). Built on Ray, vLLM, ZeRO-3. 2× faster than DeepSpeedChat with distributed architecture and GPU resource sharing.","openrlhf-training",{"claudeCode":12},"SKILL.md frontmatter at 06-post-training/openrlhf/SKILL.md",[778,779,781,783,785],{"path":505,"priority":332},{"path":780,"priority":508},"references/algorithm-comparison.md",{"path":782,"priority":508},"references/custom-rewards.md",{"path":784,"priority":508},"references/hybrid-engine.md",{"path":786,"priority":508},"references/multi-node-training.md",{"basePath":788,"description":789,"displayName":790,"installMethods":791,"rationale":792,"selectedPaths":793,"source":338,"sourceLanguage":18,"type":256},"06-post-training/simpo","Simple Preference Optimization for LLM alignment. Reference-free alternative to DPO with better performance (+6.4 points on AlpacaEval 2.0). No reference model needed, more efficient than DPO. Use for preference alignment when want simpler, faster training than DPO/PPO.","simpo-training",{"claudeCode":12},"SKILL.md frontmatter at 06-post-training/simpo/SKILL.md",[794,795,797,799],{"path":505,"priority":332},{"path":796,"priority":508},"references/datasets.md",{"path":798,"priority":508},"references/hyperparameters.md",{"path":800,"priority":508},"references/loss-functions.md",{"basePath":802,"description":803,"displayName":804,"installMethods":805,"rationale":806,"selectedPaths":807,"source":338,"sourceLanguage":18,"type":256},"06-post-training/slime","Provides guidance for LLM post-training with RL using slime, a Megatron+SGLang framework. Use when training GLM models, implementing custom data generation workflows, or needing tight Megatron-LM integration for RL scaling.","slime-rl-training",{"claudeCode":12},"SKILL.md frontmatter at 06-post-training/slime/SKILL.md",[808,809,810],{"path":505,"priority":332},{"path":769,"priority":508},{"path":662,"priority":508},{"basePath":812,"description":813,"displayName":814,"installMethods":815,"rationale":816,"selectedPaths":817,"source":338,"sourceLanguage":18,"type":256},"06-post-training/torchforge","Provides guidance for PyTorch-native agentic RL using torchforge, Meta's library separating infra from algorithms. Use when you want clean RL abstractions, easy algorithm experimentation, or scalable training with Monarch and TorchTitan.","torchforge-rl-training",{"claudeCode":12},"SKILL.md frontmatter at 06-post-training/torchforge/SKILL.md",[818,819,820],{"path":505,"priority":332},{"path":769,"priority":508},{"path":662,"priority":508},{"basePath":822,"description":823,"displayName":824,"installMethods":825,"rationale":826,"selectedPaths":827,"source":338,"sourceLanguage":18,"type":256},"06-post-training/trl-fine-tuning","Fine-tune LLMs using reinforcement learning with TRL - SFT for instruction tuning, DPO for preference alignment, PPO/GRPO for reward optimization, and reward model training. Use when need RLHF, align model with preferences, or train from human feedback. Works with HuggingFace Transformers.","fine-tuning-with-trl",{"claudeCode":12},"SKILL.md frontmatter at 06-post-training/trl-fine-tuning/SKILL.md",[828,829,831,833,835],{"path":505,"priority":332},{"path":830,"priority":508},"references/dpo-variants.md",{"path":832,"priority":508},"references/online-rl.md",{"path":834,"priority":508},"references/reward-modeling.md",{"path":836,"priority":508},"references/sft-training.md",{"basePath":838,"description":839,"displayName":840,"installMethods":841,"rationale":842,"selectedPaths":843,"source":338,"sourceLanguage":18,"type":256},"06-post-training/verl","Provides guidance for training LLMs with reinforcement learning using verl (Volcano Engine RL). Use when implementing RLHF, GRPO, PPO, or other RL algorithms for LLM post-training at scale with flexible infrastructure backends.","verl-rl-training",{"claudeCode":12},"SKILL.md frontmatter at 06-post-training/verl/SKILL.md",[844,845,846],{"path":505,"priority":332},{"path":769,"priority":508},{"path":662,"priority":508},{"basePath":848,"description":849,"displayName":850,"installMethods":851,"rationale":852,"selectedPaths":853,"source":338,"sourceLanguage":18,"type":256},"07-safety-alignment/constitutional-ai","Anthropic's method for training harmless AI through self-improvement. Two-phase approach - supervised learning with self-critique/revision, then RLAIF (RL from AI Feedback). Use for safety alignment, reducing harmful outputs without human labels. Powers Claude's safety system.","constitutional-ai",{"claudeCode":12},"SKILL.md frontmatter at 07-safety-alignment/constitutional-ai/SKILL.md",[854],{"path":505,"priority":332},{"basePath":856,"description":857,"displayName":858,"installMethods":859,"rationale":860,"selectedPaths":861,"source":338,"sourceLanguage":18,"type":256},"07-safety-alignment/llamaguard","Meta's 7-8B specialized moderation model for LLM input/output filtering. 6 safety categories - violence/hate, sexual content, weapons, substances, self-harm, criminal planning. 94-95% accuracy. Deploy with vLLM, HuggingFace, Sagemaker. Integrates with NeMo Guardrails.","llamaguard",{"claudeCode":12},"SKILL.md frontmatter at 07-safety-alignment/llamaguard/SKILL.md",[862],{"path":505,"priority":332},{"basePath":864,"description":865,"displayName":866,"installMethods":867,"rationale":868,"selectedPaths":869,"source":338,"sourceLanguage":18,"type":256},"07-safety-alignment/nemo-guardrails","NVIDIA's runtime safety framework for LLM applications. Features jailbreak detection, input/output validation, fact-checking, hallucination detection, PII filtering, toxicity detection. Uses Colang 2.0 DSL for programmable rails. Production-ready, runs on T4 GPU.","nemo-guardrails",{"claudeCode":12},"SKILL.md frontmatter at 07-safety-alignment/nemo-guardrails/SKILL.md",[870],{"path":505,"priority":332},{"basePath":872,"description":873,"displayName":874,"installMethods":875,"rationale":876,"selectedPaths":877,"source":338,"sourceLanguage":18,"type":256},"07-safety-alignment/prompt-guard","Meta's 86M prompt injection and jailbreak detector. Filters malicious prompts and third-party data for LLM apps. 99%+ TPR, \u003C1% FPR. Fast (\u003C2ms GPU). Multilingual (8 languages). Deploy with HuggingFace or batch processing for RAG security.","prompt-guard",{"claudeCode":12},"SKILL.md frontmatter at 07-safety-alignment/prompt-guard/SKILL.md",[878],{"path":505,"priority":332},{"basePath":880,"description":881,"displayName":882,"installMethods":883,"rationale":884,"selectedPaths":885,"source":338,"sourceLanguage":18,"type":256},"08-distributed-training/accelerate","Simplest distributed training API. 4 lines to add distributed support to any PyTorch script. Unified API for DeepSpeed/FSDP/Megatron/DDP. Automatic device placement, mixed precision (FP16/BF16/FP8). Interactive config, single launch command. HuggingFace ecosystem standard.","huggingface-accelerate",{"claudeCode":12},"SKILL.md frontmatter at 08-distributed-training/accelerate/SKILL.md",[886,887,889,891],{"path":505,"priority":332},{"path":888,"priority":508},"references/custom-plugins.md",{"path":890,"priority":508},"references/megatron-integration.md",{"path":892,"priority":508},"references/performance.md",{"basePath":894,"description":895,"displayName":896,"installMethods":897,"rationale":898,"selectedPaths":899,"source":338,"sourceLanguage":18,"type":256},"08-distributed-training/megatron-core","Trains large language models (2B-462B parameters) using NVIDIA Megatron-Core with advanced parallelism strategies. Use when training models >1B parameters, need maximum GPU efficiency (47% MFU on H100), or require tensor/pipeline/sequence/context/expert parallelism. Production-ready framework used for Nemotron, LLaMA, DeepSeek.","training-llms-megatron",{"claudeCode":12},"SKILL.md frontmatter at 08-distributed-training/megatron-core/SKILL.md",[900,901,902,904,906],{"path":505,"priority":332},{"path":549,"priority":508},{"path":903,"priority":508},"references/parallelism-guide.md",{"path":905,"priority":508},"references/production-examples.md",{"path":537,"priority":508},{"basePath":908,"description":909,"displayName":910,"installMethods":911,"rationale":912,"selectedPaths":913,"source":338,"sourceLanguage":18,"type":256},"08-distributed-training/pytorch-fsdp2","Adds PyTorch FSDP2 (fully_shard) to training scripts with correct init, sharding, mixed precision/offload config, and distributed checkpointing. Use when models exceed single-GPU memory or when you need DTensor-based sharding with DeviceMesh.","pytorch-fsdp2",{"claudeCode":12},"SKILL.md frontmatter at 08-distributed-training/pytorch-fsdp2/SKILL.md",[914,915,917,919,921,923,925,927,929,931,933,935,937],{"path":505,"priority":332},{"path":916,"priority":508},"references/pytorch_dcp_async_recipe.md",{"path":918,"priority":508},"references/pytorch_dcp_overview.md",{"path":920,"priority":508},"references/pytorch_dcp_recipe.md",{"path":922,"priority":508},"references/pytorch_ddp_notes.md",{"path":924,"priority":508},"references/pytorch_device_mesh_tutorial.md",{"path":926,"priority":508},"references/pytorch_examples_fsdp2.md",{"path":928,"priority":508},"references/pytorch_fsdp1_api.md",{"path":930,"priority":508},"references/pytorch_fsdp2_tutorial.md",{"path":932,"priority":508},"references/pytorch_fully_shard_api.md",{"path":934,"priority":508},"references/pytorch_tp_tutorial.md",{"path":936,"priority":508},"references/ray_train_fsdp2_example.md",{"path":938,"priority":508},"references/torchtitan_fsdp_notes.md",{"basePath":940,"description":941,"displayName":942,"installMethods":943,"rationale":944,"selectedPaths":945,"source":338,"sourceLanguage":18,"type":256},"08-distributed-training/pytorch-lightning","High-level PyTorch framework with Trainer class, automatic distributed training (DDP/FSDP/DeepSpeed), callbacks system, and minimal boilerplate. Scales from laptop to supercomputer with same code. Use when you want clean training loops with built-in best practices.","pytorch-lightning",{"claudeCode":12},"SKILL.md frontmatter at 08-distributed-training/pytorch-lightning/SKILL.md",[946,947,949,951],{"path":505,"priority":332},{"path":948,"priority":508},"references/callbacks.md",{"path":950,"priority":508},"references/distributed.md",{"path":952,"priority":508},"references/hyperparameter-tuning.md",{"basePath":954,"description":955,"displayName":956,"installMethods":957,"rationale":958,"selectedPaths":959,"source":338,"sourceLanguage":18,"type":256},"08-distributed-training/ray-train","Distributed training orchestration across clusters. Scales PyTorch/TensorFlow/HuggingFace from laptop to 1000s of nodes. Built-in hyperparameter tuning with Ray Tune, fault tolerance, elastic scaling. Use when training massive models across multiple machines or running distributed hyperparameter sweeps.","ray-train",{"claudeCode":12},"SKILL.md frontmatter at 08-distributed-training/ray-train/SKILL.md",[960,961],{"path":505,"priority":332},{"path":962,"priority":508},"references/multi-node.md",{"basePath":964,"description":965,"displayName":966,"installMethods":967,"rationale":968,"selectedPaths":969,"source":338,"sourceLanguage":18,"type":256},"09-infrastructure/lambda-labs","Reserved and on-demand GPU cloud instances for ML training and inference. Use when you need dedicated GPU instances with simple SSH access, persistent filesystems, or high-performance multi-node clusters for large-scale training.","lambda-labs-gpu-cloud",{"claudeCode":12},"SKILL.md frontmatter at 09-infrastructure/lambda-labs/SKILL.md",[970,971,972],{"path":505,"priority":332},{"path":660,"priority":508},{"path":662,"priority":508},{"basePath":974,"description":975,"displayName":976,"installMethods":977,"rationale":978,"selectedPaths":979,"source":338,"sourceLanguage":18,"type":256},"09-infrastructure/modal","Serverless GPU cloud platform for running ML workloads. Use when you need on-demand GPU access without infrastructure management, deploying ML models as APIs, or running batch jobs with automatic scaling.","modal-serverless-gpu",{"claudeCode":12},"SKILL.md frontmatter at 09-infrastructure/modal/SKILL.md",[980,981,982],{"path":505,"priority":332},{"path":660,"priority":508},{"path":662,"priority":508},{"basePath":984,"description":985,"displayName":986,"installMethods":987,"rationale":988,"selectedPaths":989,"source":338,"sourceLanguage":18,"type":256},"09-infrastructure/skypilot","Multi-cloud orchestration for ML workloads with automatic cost optimization. Use when you need to run training or batch jobs across multiple clouds, leverage spot instances with auto-recovery, or optimize GPU costs across providers.","skypilot-multi-cloud-orchestration",{"claudeCode":12},"SKILL.md frontmatter at 09-infrastructure/skypilot/SKILL.md",[990,991,992],{"path":505,"priority":332},{"path":660,"priority":508},{"path":662,"priority":508},{"basePath":994,"description":995,"displayName":996,"installMethods":997,"rationale":998,"selectedPaths":999,"source":338,"sourceLanguage":18,"type":256},"10-optimization/awq","Activation-aware weight quantization for 4-bit LLM compression with 3x speedup and minimal accuracy loss. Use when deploying large models (7B-70B) on limited GPU memory, when you need faster inference than GPTQ with better accuracy preservation, or for instruction-tuned and multimodal models. MLSys 2024 Best Paper Award winner.","awq-quantization",{"claudeCode":12},"SKILL.md frontmatter at 10-optimization/awq/SKILL.md",[1000,1001,1002],{"path":505,"priority":332},{"path":660,"priority":508},{"path":662,"priority":508},{"basePath":1004,"description":1005,"displayName":1006,"installMethods":1007,"rationale":1008,"selectedPaths":1009,"source":338,"sourceLanguage":18,"type":256},"10-optimization/bitsandbytes","Quantizes LLMs to 8-bit or 4-bit for 50-75% memory reduction with minimal accuracy loss. Use when GPU memory is limited, need to fit larger models, or want faster inference. Supports INT8, NF4, FP4 formats, QLoRA training, and 8-bit optimizers. Works with HuggingFace Transformers.","quantizing-models-bitsandbytes",{"claudeCode":12},"SKILL.md frontmatter at 10-optimization/bitsandbytes/SKILL.md",[1010,1011,1013,1015],{"path":505,"priority":332},{"path":1012,"priority":508},"references/memory-optimization.md",{"path":1014,"priority":508},"references/qlora-training.md",{"path":1016,"priority":508},"references/quantization-formats.md",{"basePath":1018,"description":1019,"displayName":1020,"installMethods":1021,"rationale":1022,"selectedPaths":1023,"source":338,"sourceLanguage":18,"type":256},"10-optimization/flash-attention","Optimizes transformer attention with Flash Attention for 2-4x speedup and 10-20x memory reduction. Use when training/running transformers with long sequences (>512 tokens), encountering GPU memory issues with attention, or need faster inference. Supports PyTorch native SDPA, flash-attn library, H100 FP8, and sliding window attention.","optimizing-attention-flash",{"claudeCode":12},"SKILL.md frontmatter at 10-optimization/flash-attention/SKILL.md",[1024,1025,1026],{"path":505,"priority":332},{"path":549,"priority":508},{"path":1027,"priority":508},"references/transformers-integration.md",{"basePath":1029,"description":1030,"displayName":1031,"installMethods":1032,"rationale":1033,"selectedPaths":1034,"source":338,"sourceLanguage":18,"type":256},"10-optimization/gguf","GGUF format and llama.cpp quantization for efficient CPU/GPU inference. Use when deploying models on consumer hardware, Apple Silicon, or when needing flexible quantization from 2-8 bit without GPU requirements.","gguf-quantization",{"claudeCode":12},"SKILL.md frontmatter at 10-optimization/gguf/SKILL.md",[1035,1036,1037],{"path":505,"priority":332},{"path":660,"priority":508},{"path":662,"priority":508},{"basePath":1039,"description":1040,"displayName":1041,"installMethods":1042,"rationale":1043,"selectedPaths":1044,"source":338,"sourceLanguage":18,"type":256},"10-optimization/gptq","Post-training 4-bit quantization for LLMs with minimal accuracy loss. Use for deploying large models (70B, 405B) on consumer GPUs, when you need 4× memory reduction with \u003C2% perplexity degradation, or for faster inference (3-4× speedup) vs FP16. Integrates with transformers and PEFT for QLoRA fine-tuning.","gptq",{"claudeCode":12},"SKILL.md frontmatter at 10-optimization/gptq/SKILL.md",[1045,1046,1048,1049],{"path":505,"priority":332},{"path":1047,"priority":508},"references/calibration.md",{"path":605,"priority":508},{"path":662,"priority":508},{"basePath":1051,"description":1052,"displayName":1053,"installMethods":1054,"rationale":1055,"selectedPaths":1056,"source":338,"sourceLanguage":18,"type":256},"10-optimization/hqq","Half-Quadratic Quantization for LLMs without calibration data. Use when quantizing models to 4/3/2-bit precision without needing calibration datasets, for fast quantization workflows, or when deploying with vLLM or HuggingFace Transformers.","hqq-quantization",{"claudeCode":12},"SKILL.md frontmatter at 10-optimization/hqq/SKILL.md",[1057,1058,1059],{"path":505,"priority":332},{"path":660,"priority":508},{"path":662,"priority":508},{"basePath":1061,"description":1062,"displayName":1063,"installMethods":1064,"rationale":1065,"selectedPaths":1066,"source":338,"sourceLanguage":18,"type":256},"10-optimization/ml-training-recipes","Battle-tested PyTorch training recipes for all domains — LLMs, vision, diffusion, medical imaging, protein/drug discovery, spatial omics, genomics. Covers training loops, optimizer selection (AdamW, Muon), LR scheduling, mixed precision, debugging, and systematic experimentation. Use when training or fine-tuning neural networks, debugging loss spikes or OOM, choosing architectures, or optimizing GPU throughput.","ml-training-recipes",{"claudeCode":12},"SKILL.md frontmatter at 10-optimization/ml-training-recipes/SKILL.md",[1067,1068,1069,1071,1073,1075,1077],{"path":505,"priority":332},{"path":561,"priority":508},{"path":1070,"priority":508},"references/biomedical.md",{"path":1072,"priority":508},"references/domain-specific.md",{"path":1074,"priority":508},"references/experiment-loop.md",{"path":1076,"priority":508},"references/optimizers.md",{"path":1078,"priority":508},"references/scaling-and-selection.md",{"basePath":1080,"description":1081,"displayName":1082,"installMethods":1083,"rationale":1084,"selectedPaths":1085,"source":338,"sourceLanguage":18,"type":256},"11-evaluation/bigcode-evaluation-harness","Evaluates code generation models across HumanEval, MBPP, MultiPL-E, and 15+ benchmarks with pass@k metrics. Use when benchmarking code models, comparing coding abilities, testing multi-language support, or measuring code generation quality. Industry standard from BigCode Project used by HuggingFace leaderboards.","evaluating-code-models",{"claudeCode":12},"SKILL.md frontmatter at 11-evaluation/bigcode-evaluation-harness/SKILL.md",[1086,1087,1088,1090],{"path":505,"priority":332},{"path":549,"priority":508},{"path":1089,"priority":508},"references/custom-tasks.md",{"path":1091,"priority":508},"references/issues.md",{"basePath":1093,"description":1094,"displayName":1095,"installMethods":1096,"rationale":1097,"selectedPaths":1098,"source":338,"sourceLanguage":18,"type":256},"11-evaluation/lm-evaluation-harness","Evaluates LLMs across 60+ academic benchmarks (MMLU, HumanEval, GSM8K, TruthfulQA, HellaSwag). Use when benchmarking model quality, comparing models, reporting academic results, or tracking training progress. Industry standard used by EleutherAI, HuggingFace, and major labs. Supports HuggingFace, vLLM, APIs.","evaluating-llms-harness",{"claudeCode":12},"SKILL.md frontmatter at 11-evaluation/lm-evaluation-harness/SKILL.md",[1099,1100,1102,1104,1105],{"path":505,"priority":332},{"path":1101,"priority":508},"references/api-evaluation.md",{"path":1103,"priority":508},"references/benchmark-guide.md",{"path":1089,"priority":508},{"path":1106,"priority":508},"references/distributed-eval.md",{"basePath":1108,"description":1109,"displayName":1110,"installMethods":1111,"rationale":1112,"selectedPaths":1113,"source":338,"sourceLanguage":18,"type":256},"11-evaluation/nemo-evaluator","Evaluates LLMs across 100+ benchmarks from 18+ harnesses (MMLU, HumanEval, GSM8K, safety, VLM) with multi-backend execution. Use when needing scalable evaluation on local Docker, Slurm HPC, or cloud platforms. NVIDIA's enterprise-grade platform with container-first architecture for reproducible benchmarking.","nemo-evaluator-sdk",{"claudeCode":12},"SKILL.md frontmatter at 11-evaluation/nemo-evaluator/SKILL.md",[1114,1115,1117,1119,1121],{"path":505,"priority":332},{"path":1116,"priority":508},"references/adapter-system.md",{"path":1118,"priority":508},"references/configuration.md",{"path":1120,"priority":508},"references/custom-benchmarks.md",{"path":1122,"priority":508},"references/execution-backends.md",{"basePath":1124,"description":1125,"displayName":1126,"installMethods":1127,"rationale":1128,"selectedPaths":1129,"source":338,"sourceLanguage":18,"type":256},"12-inference-serving/llama-cpp","Runs LLM inference on CPU, Apple Silicon, and consumer GPUs without NVIDIA hardware. Use for edge deployment, M1/M2/M3 Macs, AMD/Intel GPUs, or when CUDA is unavailable. Supports GGUF quantization (1.5-8 bit) for reduced memory and 4-10× speedup vs PyTorch on CPU.","llama-cpp",{"claudeCode":12},"SKILL.md frontmatter at 12-inference-serving/llama-cpp/SKILL.md",[1130,1131,1133,1135],{"path":505,"priority":332},{"path":1132,"priority":508},"references/optimization.md",{"path":1134,"priority":508},"references/quantization.md",{"path":1136,"priority":508},"references/server.md",{"basePath":1138,"description":1139,"displayName":1140,"installMethods":1141,"rationale":1142,"selectedPaths":1143,"source":338,"sourceLanguage":18,"type":256},"12-inference-serving/sglang","Fast structured generation and serving for LLMs with RadixAttention prefix caching. Use for JSON/regex outputs, constrained decoding, agentic workflows with tool calls, or when you need 5× faster inference than vLLM with prefix sharing. Powers 300,000+ GPUs at xAI, AMD, NVIDIA, and LinkedIn.","sglang",{"claudeCode":12},"SKILL.md frontmatter at 12-inference-serving/sglang/SKILL.md",[1144,1145,1147,1149],{"path":505,"priority":332},{"path":1146,"priority":508},"references/deployment.md",{"path":1148,"priority":508},"references/radix-attention.md",{"path":1150,"priority":508},"references/structured-generation.md",{"basePath":253,"description":10,"displayName":13,"installMethods":1152,"rationale":1153,"selectedPaths":1154,"source":338,"sourceLanguage":18,"type":256},{"claudeCode":12},"SKILL.md frontmatter at 12-inference-serving/tensorrt-llm/SKILL.md",[1155,1156,1158,1159],{"path":505,"priority":332},{"path":1157,"priority":508},"references/multi-gpu.md",{"path":1132,"priority":508},{"path":1160,"priority":508},"references/serving.md",{"basePath":1162,"description":1163,"displayName":1164,"installMethods":1165,"rationale":1166,"selectedPaths":1167,"source":338,"sourceLanguage":18,"type":256},"12-inference-serving/vllm","Serves LLMs with high throughput using vLLM's PagedAttention and continuous batching. Use when deploying production LLM APIs, optimizing inference latency/throughput, or serving models with limited GPU memory. Supports OpenAI-compatible endpoints, quantization (GPTQ/AWQ/FP8), and tensor parallelism.","serving-llms-vllm",{"claudeCode":12},"SKILL.md frontmatter at 12-inference-serving/vllm/SKILL.md",[1168,1169,1170,1171,1173],{"path":505,"priority":332},{"path":1132,"priority":508},{"path":1134,"priority":508},{"path":1172,"priority":508},"references/server-deployment.md",{"path":662,"priority":508},{"basePath":1175,"description":1176,"displayName":1177,"installMethods":1178,"rationale":1179,"selectedPaths":1180,"source":338,"sourceLanguage":18,"type":256},"13-mlops/mlflow","Track ML experiments, manage model registry with versioning, deploy models to production, and reproduce experiments with MLflow - framework-agnostic ML lifecycle platform","mlflow",{"claudeCode":12},"SKILL.md frontmatter at 13-mlops/mlflow/SKILL.md",[1181,1182,1183,1185],{"path":505,"priority":332},{"path":1146,"priority":508},{"path":1184,"priority":508},"references/model-registry.md",{"path":1186,"priority":508},"references/tracking.md",{"basePath":1188,"description":1189,"displayName":1190,"installMethods":1191,"rationale":1192,"selectedPaths":1193,"source":338,"sourceLanguage":18,"type":256},"13-mlops/swanlab","Provides guidance for experiment tracking with SwanLab. Use when you need open-source run tracking, local or self-hosted dashboards, and lightweight media logging for ML workflows.","experiment-tracking-swanlab",{"claudeCode":12},"SKILL.md frontmatter at 13-mlops/swanlab/SKILL.md",[1194,1195,1197],{"path":505,"priority":332},{"path":1196,"priority":508},"references/integrations.md",{"path":1198,"priority":508},"references/visualization.md",{"basePath":1200,"description":1201,"displayName":1202,"installMethods":1203,"rationale":1204,"selectedPaths":1205,"source":338,"sourceLanguage":18,"type":256},"13-mlops/tensorboard","Visualize training metrics, debug models with histograms, compare experiments, visualize model graphs, and profile performance with TensorBoard - Google's ML visualization toolkit","tensorboard",{"claudeCode":12},"SKILL.md frontmatter at 13-mlops/tensorboard/SKILL.md",[1206,1207,1208,1210],{"path":505,"priority":332},{"path":1196,"priority":508},{"path":1209,"priority":508},"references/profiling.md",{"path":1198,"priority":508},{"basePath":1212,"description":1213,"displayName":1214,"installMethods":1215,"rationale":1216,"selectedPaths":1217,"source":338,"sourceLanguage":18,"type":256},"13-mlops/weights-and-biases","Track ML experiments with automatic logging, visualize training in real-time, optimize hyperparameters with sweeps, and manage model registry with W&B - collaborative MLOps platform","weights-and-biases",{"claudeCode":12},"SKILL.md frontmatter at 13-mlops/weights-and-biases/SKILL.md",[1218,1219,1221,1222],{"path":505,"priority":332},{"path":1220,"priority":508},"references/artifacts.md",{"path":1196,"priority":508},{"path":1223,"priority":508},"references/sweeps.md",{"basePath":1225,"description":1226,"displayName":1227,"installMethods":1228,"rationale":1229,"selectedPaths":1230,"source":338,"sourceLanguage":18,"type":256},"14-agents/a-evolve","Provides guidance for automatically evolving and optimizing AI agents across any domain using LLM-driven evolution algorithms. Use when building self-improving agents, optimizing agent prompts and skills against benchmarks, or implementing automated agent evaluation loops.","evolving-ai-agents",{"claudeCode":12},"SKILL.md frontmatter at 14-agents/a-evolve/SKILL.md",[1231,1232,1233,1234,1235,1237,1239,1240,1242],{"path":505,"priority":332},{"path":687,"priority":508},{"path":628,"priority":508},{"path":561,"priority":508},{"path":1236,"priority":508},"references/design-patterns.md",{"path":1238,"priority":508},"references/examples.md",{"path":1091,"priority":508},{"path":1241,"priority":508},"references/releases.md",{"path":690,"priority":508},{"basePath":1244,"description":1245,"displayName":1246,"installMethods":1247,"rationale":1248,"selectedPaths":1249,"source":338,"sourceLanguage":18,"type":256},"14-agents/autogpt","Autonomous AI agent platform for building and deploying continuous agents. Use when creating visual workflow agents, deploying persistent autonomous agents, or building complex multi-step AI automation systems.","autogpt-agents",{"claudeCode":12},"SKILL.md frontmatter at 14-agents/autogpt/SKILL.md",[1250,1251,1252],{"path":505,"priority":332},{"path":660,"priority":508},{"path":662,"priority":508},{"basePath":1254,"description":1255,"displayName":1256,"installMethods":1257,"rationale":1258,"selectedPaths":1259,"source":338,"sourceLanguage":18,"type":256},"14-agents/crewai","Multi-agent orchestration framework for autonomous AI collaboration. Use when building teams of specialized agents working together on complex tasks, when you need role-based agent collaboration with memory, or for production workflows requiring sequential/hierarchical execution. Built without LangChain dependencies for lean, fast execution.","crewai-multi-agent",{"claudeCode":12},"SKILL.md frontmatter at 14-agents/crewai/SKILL.md",[1260,1261,1263,1265],{"path":505,"priority":332},{"path":1262,"priority":508},"references/flows.md",{"path":1264,"priority":508},"references/tools.md",{"path":662,"priority":508},{"basePath":1267,"description":1268,"displayName":1269,"installMethods":1270,"rationale":1271,"selectedPaths":1272,"source":338,"sourceLanguage":18,"type":256},"14-agents/langchain","Framework for building LLM-powered applications with agents, chains, and RAG. Supports multiple providers (OpenAI, Anthropic, Google), 500+ integrations, ReAct agents, tool calling, memory management, and vector store retrieval. Use for building chatbots, question-answering systems, autonomous agents, or RAG applications. Best for rapid prototyping and production deployments.","langchain",{"claudeCode":12},"SKILL.md frontmatter at 14-agents/langchain/SKILL.md",[1273,1274,1276,1277],{"path":505,"priority":332},{"path":1275,"priority":508},"references/agents.md",{"path":605,"priority":508},{"path":1278,"priority":508},"references/rag.md",{"basePath":1280,"description":1281,"displayName":1282,"installMethods":1283,"rationale":1284,"selectedPaths":1285,"source":338,"sourceLanguage":18,"type":256},"14-agents/llamaindex","Data framework for building LLM applications with RAG. Specializes in document ingestion (300+ connectors), indexing, and querying. Features vector indices, query engines, agents, and multi-modal support. Use for document Q&A, chatbots, knowledge retrieval, or building RAG pipelines. Best for data-centric LLM applications.","llamaindex",{"claudeCode":12},"SKILL.md frontmatter at 14-agents/llamaindex/SKILL.md",[1286,1287,1288,1290],{"path":505,"priority":332},{"path":1275,"priority":508},{"path":1289,"priority":508},"references/data_connectors.md",{"path":1291,"priority":508},"references/query_engines.md",{"basePath":1293,"description":1294,"displayName":1295,"installMethods":1296,"rationale":1297,"selectedPaths":1298,"source":338,"sourceLanguage":18,"type":256},"15-rag/chroma","Open-source embedding database for AI applications. Store embeddings and metadata, perform vector and full-text search, filter by metadata. Simple 4-function API. Scales from notebooks to production clusters. Use for semantic search, RAG applications, or document retrieval. Best for local development and open-source projects.","chroma",{"claudeCode":12},"SKILL.md frontmatter at 15-rag/chroma/SKILL.md",[1299,1300],{"path":505,"priority":332},{"path":605,"priority":508},{"basePath":1302,"description":1303,"displayName":1304,"installMethods":1305,"rationale":1306,"selectedPaths":1307,"source":338,"sourceLanguage":18,"type":256},"15-rag/faiss","Facebook's library for efficient similarity search and clustering of dense vectors. Supports billions of vectors, GPU acceleration, and various index types (Flat, IVF, HNSW). Use for fast k-NN search, large-scale vector retrieval, or when you need pure similarity search without metadata. Best for high-performance applications.","faiss",{"claudeCode":12},"SKILL.md frontmatter at 15-rag/faiss/SKILL.md",[1308,1309],{"path":505,"priority":332},{"path":1310,"priority":508},"references/index_types.md",{"basePath":1312,"description":1313,"displayName":1314,"installMethods":1315,"rationale":1316,"selectedPaths":1317,"source":338,"sourceLanguage":18,"type":256},"15-rag/pinecone","Managed vector database for production AI applications. Fully managed, auto-scaling, with hybrid search (dense + sparse), metadata filtering, and namespaces. Low latency (\u003C100ms p95). Use for production RAG, recommendation systems, or semantic search at scale. Best for serverless, managed infrastructure.","pinecone",{"claudeCode":12},"SKILL.md frontmatter at 15-rag/pinecone/SKILL.md",[1318,1319],{"path":505,"priority":332},{"path":1146,"priority":508},{"basePath":1321,"description":1322,"displayName":1323,"installMethods":1324,"rationale":1325,"selectedPaths":1326,"source":338,"sourceLanguage":18,"type":256},"15-rag/qdrant","High-performance vector similarity search engine for RAG and semantic search. Use when building production RAG systems requiring fast nearest neighbor search, hybrid search with filtering, or scalable vector storage with Rust-powered performance.","qdrant-vector-search",{"claudeCode":12},"SKILL.md frontmatter at 15-rag/qdrant/SKILL.md",[1327,1328,1329],{"path":505,"priority":332},{"path":660,"priority":508},{"path":662,"priority":508},{"basePath":1331,"description":1332,"displayName":1333,"installMethods":1334,"rationale":1335,"selectedPaths":1336,"source":338,"sourceLanguage":18,"type":256},"15-rag/sentence-transformers","Framework for state-of-the-art sentence, text, and image embeddings. Provides 5000+ pre-trained models for semantic similarity, clustering, and retrieval. Supports multilingual, domain-specific, and multimodal models. Use for generating embeddings for RAG, semantic search, or similarity tasks. Best for production embedding generation.","sentence-transformers",{"claudeCode":12},"SKILL.md frontmatter at 15-rag/sentence-transformers/SKILL.md",[1337,1338],{"path":505,"priority":332},{"path":1339,"priority":508},"references/models.md",{"basePath":1341,"description":1342,"displayName":1343,"installMethods":1344,"rationale":1345,"selectedPaths":1346,"source":338,"sourceLanguage":18,"type":256},"16-prompt-engineering/dspy","Build complex AI systems with declarative programming, optimize prompts automatically, create modular RAG systems and agents with DSPy - Stanford NLP's framework for systematic LM programming","dspy",{"claudeCode":12},"SKILL.md frontmatter at 16-prompt-engineering/dspy/SKILL.md",[1347,1348,1349,1351],{"path":505,"priority":332},{"path":1238,"priority":508},{"path":1350,"priority":508},"references/modules.md",{"path":1076,"priority":508},{"basePath":1353,"description":1354,"displayName":1355,"installMethods":1356,"rationale":1357,"selectedPaths":1358,"source":338,"sourceLanguage":18,"type":256},"16-prompt-engineering/guidance","Control LLM output with regex and grammars, guarantee valid JSON/XML/code generation, enforce structured formats, and build multi-step workflows with Guidance - Microsoft Research's constrained generation framework","guidance",{"claudeCode":12},"SKILL.md frontmatter at 16-prompt-engineering/guidance/SKILL.md",[1359,1360,1362,1364],{"path":505,"priority":332},{"path":1361,"priority":508},"references/backends.md",{"path":1363,"priority":508},"references/constraints.md",{"path":1238,"priority":508},{"basePath":1366,"description":1367,"displayName":1368,"installMethods":1369,"rationale":1370,"selectedPaths":1371,"source":338,"sourceLanguage":18,"type":256},"16-prompt-engineering/instructor","Extract structured data from LLM responses with Pydantic validation, retry failed extractions automatically, parse complex JSON with type safety, and stream partial results with Instructor - battle-tested structured output library","instructor",{"claudeCode":12},"SKILL.md frontmatter at 16-prompt-engineering/instructor/SKILL.md",[1372,1373,1374,1376],{"path":505,"priority":332},{"path":1238,"priority":508},{"path":1375,"priority":508},"references/providers.md",{"path":1377,"priority":508},"references/validation.md",{"basePath":1379,"description":1380,"displayName":1381,"installMethods":1382,"rationale":1383,"selectedPaths":1384,"source":338,"sourceLanguage":18,"type":256},"16-prompt-engineering/outlines","Guarantee valid JSON/XML/code structure during generation, use Pydantic models for type-safe outputs, support local models (Transformers, vLLM), and maximize inference speed with Outlines - dottxt.ai's structured generation library","outlines",{"claudeCode":12},"SKILL.md frontmatter at 16-prompt-engineering/outlines/SKILL.md",[1385,1386,1387,1388],{"path":505,"priority":332},{"path":1361,"priority":508},{"path":1238,"priority":508},{"path":1389,"priority":508},"references/json_generation.md",{"basePath":1391,"description":1392,"displayName":1393,"installMethods":1394,"rationale":1395,"selectedPaths":1396,"source":338,"sourceLanguage":18,"type":256},"17-observability/langsmith","LLM observability platform for tracing, evaluation, and monitoring. Use when debugging LLM applications, evaluating model outputs against datasets, monitoring production systems, or building systematic testing pipelines for AI applications.","langsmith-observability",{"claudeCode":12},"SKILL.md frontmatter at 17-observability/langsmith/SKILL.md",[1397,1398,1399],{"path":505,"priority":332},{"path":660,"priority":508},{"path":662,"priority":508},{"basePath":1401,"description":1402,"displayName":1403,"installMethods":1404,"rationale":1405,"selectedPaths":1406,"source":338,"sourceLanguage":18,"type":256},"17-observability/phoenix","Open-source AI observability platform for LLM tracing, evaluation, and monitoring. Use when debugging LLM applications with detailed traces, running evaluations on datasets, or monitoring production AI systems with real-time insights.","phoenix-observability",{"claudeCode":12},"SKILL.md frontmatter at 17-observability/phoenix/SKILL.md",[1407,1408,1409],{"path":505,"priority":332},{"path":660,"priority":508},{"path":662,"priority":508},{"basePath":1411,"description":1412,"displayName":1413,"installMethods":1414,"rationale":1415,"selectedPaths":1416,"source":338,"sourceLanguage":18,"type":256},"18-multimodal/audiocraft","PyTorch library for audio generation including text-to-music (MusicGen) and text-to-sound (AudioGen). Use when you need to generate music from text descriptions, create sound effects, or perform melody-conditioned music generation.","audiocraft-audio-generation",{"claudeCode":12},"SKILL.md frontmatter at 18-multimodal/audiocraft/SKILL.md",[1417,1418,1419],{"path":505,"priority":332},{"path":660,"priority":508},{"path":662,"priority":508},{"basePath":1421,"description":1422,"displayName":1423,"installMethods":1424,"rationale":1425,"selectedPaths":1426,"source":338,"sourceLanguage":18,"type":256},"18-multimodal/blip-2","Vision-language pre-training framework bridging frozen image encoders and LLMs. Use when you need image captioning, visual question answering, image-text retrieval, or multimodal chat with state-of-the-art zero-shot performance.","blip-2-vision-language",{"claudeCode":12},"SKILL.md frontmatter at 18-multimodal/blip-2/SKILL.md",[1427,1428,1429],{"path":505,"priority":332},{"path":660,"priority":508},{"path":662,"priority":508},{"basePath":1431,"description":1432,"displayName":1433,"installMethods":1434,"rationale":1435,"selectedPaths":1436,"source":338,"sourceLanguage":18,"type":256},"18-multimodal/clip","OpenAI's model connecting vision and language. Enables zero-shot image classification, image-text matching, and cross-modal retrieval. Trained on 400M image-text pairs. Use for image search, content moderation, or vision-language tasks without fine-tuning. Best for general-purpose image understanding.","clip",{"claudeCode":12},"SKILL.md frontmatter at 18-multimodal/clip/SKILL.md",[1437,1438],{"path":505,"priority":332},{"path":1439,"priority":508},"references/applications.md",{"basePath":1441,"description":1442,"displayName":1443,"installMethods":1444,"rationale":1445,"selectedPaths":1446,"source":338,"sourceLanguage":18,"type":256},"18-multimodal/cosmos-policy","Evaluates NVIDIA Cosmos Policy on LIBERO and RoboCasa simulation environments. Use when setting up cosmos-policy for robot manipulation evaluation, running headless GPU evaluations with EGL rendering, or profiling inference latency on cluster or local GPU machines.","evaluating-cosmos-policy",{"claudeCode":12},"SKILL.md frontmatter at 18-multimodal/cosmos-policy/SKILL.md",[1447,1448,1450],{"path":505,"priority":332},{"path":1449,"priority":508},"references/libero-commands.md",{"path":1451,"priority":508},"references/robocasa-commands.md",{"basePath":1453,"description":1454,"displayName":1455,"installMethods":1456,"rationale":1457,"selectedPaths":1458,"source":338,"sourceLanguage":18,"type":256},"18-multimodal/llava","Large Language and Vision Assistant. Enables visual instruction tuning and image-based conversations. Combines CLIP vision encoder with Vicuna/LLaMA language models. Supports multi-turn image chat, visual question answering, and instruction following. Use for vision-language chatbots or image understanding tasks. Best for conversational image analysis.","llava",{"claudeCode":12},"SKILL.md frontmatter at 18-multimodal/llava/SKILL.md",[1459,1460],{"path":505,"priority":332},{"path":565,"priority":508},{"basePath":1462,"description":1463,"displayName":1464,"installMethods":1465,"rationale":1466,"selectedPaths":1467,"source":338,"sourceLanguage":18,"type":256},"18-multimodal/openpi","Fine-tune and serve Physical Intelligence OpenPI models (pi0, pi0-fast, pi0.5) using JAX or PyTorch backends for robot policy inference across ALOHA, DROID, and LIBERO environments. Use when adapting pi0 models to custom datasets, converting JAX checkpoints to PyTorch, running policy inference servers, or debugging norm stats and GPU memory issues.","fine-tuning-serving-openpi",{"claudeCode":12},"SKILL.md frontmatter at 18-multimodal/openpi/SKILL.md",[1468,1469,1471,1473,1475,1477],{"path":505,"priority":332},{"path":1470,"priority":508},"references/checkpoints-and-env-map.md",{"path":1472,"priority":508},"references/config-recipes.md",{"path":1474,"priority":508},"references/pytorch-gotchas.md",{"path":1476,"priority":508},"references/remote-client-pattern.md",{"path":1478,"priority":508},"references/training-debugging.md",{"basePath":1480,"description":1481,"displayName":1482,"installMethods":1483,"rationale":1484,"selectedPaths":1485,"source":338,"sourceLanguage":18,"type":256},"18-multimodal/openvla-oft","Fine-tunes and evaluates OpenVLA-OFT and OpenVLA-OFT+ policies for robot action generation with continuous action heads, LoRA adaptation, and FiLM conditioning on LIBERO simulation and ALOHA real-world setups. Use when reproducing OpenVLA-OFT paper results, training custom VLA action heads (L1 or diffusion), deploying server-client inference for ALOHA, or debugging normalization, LoRA merge, and cross-GPU issues.","fine-tuning-openvla-oft",{"claudeCode":12},"SKILL.md frontmatter at 18-multimodal/openvla-oft/SKILL.md",[1486,1487,1489,1491,1493],{"path":505,"priority":332},{"path":1488,"priority":508},"references/aloha-workflow.md",{"path":1490,"priority":508},"references/config-troubleshooting.md",{"path":1492,"priority":508},"references/libero-workflow.md",{"path":1494,"priority":508},"references/paper-and-checkpoints.md",{"basePath":1496,"description":1497,"displayName":1498,"installMethods":1499,"rationale":1500,"selectedPaths":1501,"source":338,"sourceLanguage":18,"type":256},"18-multimodal/segment-anything","Foundation model for image segmentation with zero-shot transfer. Use when you need to segment any object in images using points, boxes, or masks as prompts, or automatically generate all object masks in an image.","segment-anything-model",{"claudeCode":12},"SKILL.md frontmatter at 18-multimodal/segment-anything/SKILL.md",[1502,1503,1504],{"path":505,"priority":332},{"path":660,"priority":508},{"path":662,"priority":508},{"basePath":1506,"description":1507,"displayName":1508,"installMethods":1509,"rationale":1510,"selectedPaths":1511,"source":338,"sourceLanguage":18,"type":256},"18-multimodal/stable-diffusion","State-of-the-art text-to-image generation with Stable Diffusion models via HuggingFace Diffusers. Use when generating images from text prompts, performing image-to-image translation, inpainting, or building custom diffusion pipelines.","stable-diffusion-image-generation",{"claudeCode":12},"SKILL.md frontmatter at 18-multimodal/stable-diffusion/SKILL.md",[1512,1513,1514],{"path":505,"priority":332},{"path":660,"priority":508},{"path":662,"priority":508},{"basePath":1516,"description":1517,"displayName":1518,"installMethods":1519,"rationale":1520,"selectedPaths":1521,"source":338,"sourceLanguage":18,"type":256},"18-multimodal/whisper","OpenAI's general-purpose speech recognition model. Supports 99 languages, transcription, translation to English, and language identification. Six model sizes from tiny (39M params) to large (1550M params). Use for speech-to-text, podcast transcription, or multilingual audio processing. Best for robust, multilingual ASR.","whisper",{"claudeCode":12},"SKILL.md frontmatter at 18-multimodal/whisper/SKILL.md",[1522,1523],{"path":505,"priority":332},{"path":1524,"priority":508},"references/languages.md",{"basePath":1526,"description":1527,"displayName":1528,"installMethods":1529,"rationale":1530,"selectedPaths":1531,"source":338,"sourceLanguage":18,"type":256},"19-emerging-techniques/knowledge-distillation","Compress large language models using knowledge distillation from teacher to student models. Use when deploying smaller models with retained performance, transferring GPT-4 capabilities to open-source models, or reducing inference costs. Covers temperature scaling, soft targets, reverse KLD, logit distillation, and MiniLLM training strategies.","knowledge-distillation",{"claudeCode":12},"SKILL.md frontmatter at 19-emerging-techniques/knowledge-distillation/SKILL.md",[1532,1533],{"path":505,"priority":332},{"path":1534,"priority":508},"references/minillm.md",{"basePath":1536,"description":1537,"displayName":1538,"installMethods":1539,"rationale":1540,"selectedPaths":1541,"source":338,"sourceLanguage":18,"type":256},"19-emerging-techniques/long-context","Extend context windows of transformer models using RoPE, YaRN, ALiBi, and position interpolation techniques. Use when processing long documents (32k-128k+ tokens), extending pre-trained models beyond original context limits, or implementing efficient positional encodings. Covers rotary embeddings, attention biases, interpolation methods, and extrapolation strategies for LLMs.","long-context",{"claudeCode":12},"SKILL.md frontmatter at 19-emerging-techniques/long-context/SKILL.md",[1542,1543,1545,1547],{"path":505,"priority":332},{"path":1544,"priority":508},"references/extension_methods.md",{"path":1546,"priority":508},"references/fine_tuning.md",{"path":1548,"priority":508},"references/rope.md",{"basePath":1550,"description":1551,"displayName":1552,"installMethods":1553,"rationale":1554,"selectedPaths":1555,"source":338,"sourceLanguage":18,"type":256},"19-emerging-techniques/model-merging","Merge multiple fine-tuned models using mergekit to combine capabilities without retraining. Use when creating specialized models by blending domain-specific expertise (math + coding + chat), improving performance beyond single models, or experimenting rapidly with model variants. Covers SLERP, TIES-Merging, DARE, Task Arithmetic, linear merging, and production deployment strategies.","model-merging",{"claudeCode":12},"SKILL.md frontmatter at 19-emerging-techniques/model-merging/SKILL.md",[1556,1557,1559,1560],{"path":505,"priority":332},{"path":1558,"priority":508},"references/evaluation.md",{"path":1238,"priority":508},{"path":1561,"priority":508},"references/methods.md",{"basePath":1563,"description":1564,"displayName":1565,"installMethods":1566,"rationale":1567,"selectedPaths":1568,"source":338,"sourceLanguage":18,"type":256},"19-emerging-techniques/model-pruning","Reduce LLM size and accelerate inference using pruning techniques like Wanda and SparseGPT. Use when compressing models without retraining, achieving 50% sparsity with minimal accuracy loss, or enabling faster inference on hardware accelerators. Covers unstructured pruning, structured pruning, N:M sparsity, magnitude pruning, and one-shot methods.","model-pruning",{"claudeCode":12},"SKILL.md frontmatter at 19-emerging-techniques/model-pruning/SKILL.md",[1569,1570],{"path":505,"priority":332},{"path":1571,"priority":508},"references/wanda.md",{"basePath":1573,"description":1574,"displayName":1575,"installMethods":1576,"rationale":1577,"selectedPaths":1578,"source":338,"sourceLanguage":18,"type":256},"19-emerging-techniques/moe-training","Train Mixture of Experts (MoE) models using DeepSpeed or HuggingFace. Use when training large-scale models with limited compute (5× cost reduction vs dense models), implementing sparse architectures like Mixtral 8x7B or DeepSeek-V3, or scaling model capacity without proportional compute increase. Covers MoE architectures, routing mechanisms, load balancing, expert parallelism, and inference optimization.","moe-training",{"claudeCode":12},"SKILL.md frontmatter at 19-emerging-techniques/moe-training/SKILL.md",[1579,1580,1582,1584],{"path":505,"priority":332},{"path":1581,"priority":508},"references/architectures.md",{"path":1583,"priority":508},"references/inference.md",{"path":565,"priority":508},{"basePath":1586,"description":1587,"displayName":1588,"installMethods":1589,"rationale":1590,"selectedPaths":1591,"source":338,"sourceLanguage":18,"type":256},"19-emerging-techniques/speculative-decoding","Accelerate LLM inference using speculative decoding, Medusa multiple heads, and lookahead decoding techniques. Use when optimizing inference speed (1.5-3.6× speedup), reducing latency for real-time applications, or deploying models with limited compute. Covers draft models, tree-based attention, Jacobi iteration, parallel token generation, and production deployment strategies.","speculative-decoding",{"claudeCode":12},"SKILL.md frontmatter at 19-emerging-techniques/speculative-decoding/SKILL.md",[1592,1593,1595],{"path":505,"priority":332},{"path":1594,"priority":508},"references/lookahead.md",{"path":1596,"priority":508},"references/medusa.md",{"basePath":1598,"description":1599,"displayName":1600,"installMethods":1601,"rationale":1602,"selectedPaths":1603,"source":338,"sourceLanguage":18,"type":256},"20-ml-paper-writing/academic-plotting","Generates publication-quality figures for ML papers from research context. Given a paper section or description, extracts system components and relationships to generate architecture diagrams via Gemini. Given experiment results or data, auto-selects chart type and generates data-driven figures via matplotlib/seaborn. Use when creating any figure for a conference paper.","academic-plotting",{"claudeCode":12},"SKILL.md frontmatter at 20-ml-paper-writing/academic-plotting/SKILL.md",[1604,1605,1607,1609],{"path":505,"priority":332},{"path":1606,"priority":508},"references/data-visualization.md",{"path":1608,"priority":508},"references/diagram-generation.md",{"path":1610,"priority":508},"references/style-guide.md",{"basePath":1612,"description":1613,"displayName":479,"installMethods":1614,"rationale":1615,"selectedPaths":1616,"source":338,"sourceLanguage":18,"type":256},"20-ml-paper-writing/ml-paper-writing","Write publication-ready ML/AI papers for NeurIPS, ICML, ICLR, ACL, AAAI, COLM. Use when drafting papers from research repos, structuring arguments, verifying citations, or preparing camera-ready submissions. For systems venues (OSDI, NSDI, ASPLOS, SOSP), use systems-paper-writing instead.",{"claudeCode":12},"SKILL.md frontmatter at 20-ml-paper-writing/ml-paper-writing/SKILL.md",[1617,1618,1620,1622,1624,1626,1628,1630,1632,1634,1636,1638,1640,1642,1644,1646,1648,1650,1652,1654,1656,1658,1660,1662,1664,1666,1668,1670,1672,1674,1676,1678,1680,1682,1684,1686,1688,1690,1692,1694,1696,1698,1700,1702,1704,1706,1708,1710,1712,1714,1716],{"path":505,"priority":332},{"path":1619,"priority":508},"references/checklists.md",{"path":1621,"priority":508},"references/citation-workflow.md",{"path":1623,"priority":508},"references/reviewer-guidelines.md",{"path":1625,"priority":508},"references/sources.md",{"path":1627,"priority":508},"references/writing-guide.md",{"path":1629,"priority":515},"templates/README.md",{"path":1631,"priority":515},"templates/aaai2026/README.md",{"path":1633,"priority":515},"templates/aaai2026/aaai2026-unified-supp.tex",{"path":1635,"priority":515},"templates/aaai2026/aaai2026-unified-template.tex",{"path":1637,"priority":515},"templates/aaai2026/aaai2026.bib",{"path":1639,"priority":515},"templates/aaai2026/aaai2026.bst",{"path":1641,"priority":515},"templates/aaai2026/aaai2026.sty",{"path":1643,"priority":515},"templates/acl/README.md",{"path":1645,"priority":515},"templates/acl/acl.sty",{"path":1647,"priority":515},"templates/acl/acl_latex.tex",{"path":1649,"priority":515},"templates/acl/acl_lualatex.tex",{"path":1651,"priority":515},"templates/acl/acl_natbib.bst",{"path":1653,"priority":515},"templates/acl/anthology.bib.txt",{"path":1655,"priority":515},"templates/acl/custom.bib",{"path":1657,"priority":515},"templates/acl/formatting.md",{"path":1659,"priority":515},"templates/colm2025/README.md",{"path":1661,"priority":515},"templates/colm2025/colm2025_conference.bib",{"path":1663,"priority":515},"templates/colm2025/colm2025_conference.bst",{"path":1665,"priority":515},"templates/colm2025/colm2025_conference.pdf",{"path":1667,"priority":515},"templates/colm2025/colm2025_conference.sty",{"path":1669,"priority":515},"templates/colm2025/colm2025_conference.tex",{"path":1671,"priority":515},"templates/colm2025/fancyhdr.sty",{"path":1673,"priority":515},"templates/colm2025/math_commands.tex",{"path":1675,"priority":515},"templates/colm2025/natbib.sty",{"path":1677,"priority":515},"templates/iclr2026/fancyhdr.sty",{"path":1679,"priority":515},"templates/iclr2026/iclr2026_conference.bib",{"path":1681,"priority":515},"templates/iclr2026/iclr2026_conference.bst",{"path":1683,"priority":515},"templates/iclr2026/iclr2026_conference.pdf",{"path":1685,"priority":515},"templates/iclr2026/iclr2026_conference.sty",{"path":1687,"priority":515},"templates/iclr2026/iclr2026_conference.tex",{"path":1689,"priority":515},"templates/iclr2026/math_commands.tex",{"path":1691,"priority":515},"templates/iclr2026/natbib.sty",{"path":1693,"priority":515},"templates/icml2026/algorithm.sty",{"path":1695,"priority":515},"templates/icml2026/algorithmic.sty",{"path":1697,"priority":515},"templates/icml2026/example_paper.bib",{"path":1699,"priority":515},"templates/icml2026/example_paper.pdf",{"path":1701,"priority":515},"templates/icml2026/example_paper.tex",{"path":1703,"priority":515},"templates/icml2026/fancyhdr.sty",{"path":1705,"priority":515},"templates/icml2026/icml2026.bst",{"path":1707,"priority":515},"templates/icml2026/icml2026.sty",{"path":1709,"priority":515},"templates/icml2026/icml_numpapers.pdf",{"path":1711,"priority":515},"templates/neurips2025/Makefile",{"path":1713,"priority":515},"templates/neurips2025/extra_pkgs.tex",{"path":1715,"priority":515},"templates/neurips2025/main.tex",{"path":1717,"priority":515},"templates/neurips2025/neurips.sty",{"basePath":1719,"description":1720,"displayName":1721,"installMethods":1722,"rationale":1723,"selectedPaths":1724,"source":338,"sourceLanguage":18,"type":256},"20-ml-paper-writing/presenting-conference-talks","Generates conference presentation slides (Beamer LaTeX PDF and editable PPTX) from a compiled paper with speaker notes and talk script. Use when preparing oral talks, spotlight presentations, or invited talks for ML and systems conferences.","presenting-conference-talks",{"claudeCode":12},"SKILL.md frontmatter at 20-ml-paper-writing/presenting-conference-talks/SKILL.md",[1725,1726],{"path":505,"priority":332},{"path":1727,"priority":508},"references/slide-templates.md",{"basePath":1729,"description":1730,"displayName":1731,"installMethods":1732,"rationale":1733,"selectedPaths":1734,"source":338,"sourceLanguage":18,"type":256},"20-ml-paper-writing/systems-paper-writing","Comprehensive guide for writing systems papers targeting OSDI, SOSP, ASPLOS, NSDI, and EuroSys. Provides paragraph-level structural blueprints, writing patterns, venue-specific checklists, reviewer guidelines, LaTeX templates, and conference deadlines. Use this skill for all systems conference paper writing.","systems-paper-writing",{"claudeCode":12},"SKILL.md frontmatter at 20-ml-paper-writing/systems-paper-writing/SKILL.md",[1735,1736,1738,1739,1741,1743,1745,1747,1749,1751,1753,1755,1757,1759,1761,1763],{"path":505,"priority":332},{"path":1737,"priority":508},"references/checklist.md",{"path":1623,"priority":508},{"path":1740,"priority":508},"references/section-blueprints.md",{"path":1742,"priority":508},"references/systems-conferences.md",{"path":1744,"priority":508},"references/writing-patterns.md",{"path":1746,"priority":515},"templates/asplos2027/main.tex",{"path":1748,"priority":515},"templates/asplos2027/references.bib",{"path":1750,"priority":515},"templates/nsdi2027/main.tex",{"path":1752,"priority":515},"templates/nsdi2027/references.bib",{"path":1754,"priority":515},"templates/nsdi2027/usenix-2020-09.sty",{"path":1756,"priority":515},"templates/osdi2026/main.tex",{"path":1758,"priority":515},"templates/osdi2026/references.bib",{"path":1760,"priority":515},"templates/osdi2026/usenix-2020-09.sty",{"path":1762,"priority":515},"templates/sosp2026/main.tex",{"path":1764,"priority":515},"templates/sosp2026/references.bib",{"basePath":1766,"description":1767,"displayName":1768,"installMethods":1769,"rationale":1770,"selectedPaths":1771,"source":338,"sourceLanguage":18,"type":256},"21-research-ideation/brainstorming-research-ideas","Guides researchers through structured ideation frameworks to discover high-impact research directions. Use when exploring new problem spaces, pivoting between projects, or seeking novel angles on existing work.","brainstorming-research-ideas",{"claudeCode":12},"SKILL.md frontmatter at 21-research-ideation/brainstorming-research-ideas/SKILL.md",[1772],{"path":505,"priority":332},{"basePath":1774,"description":1775,"displayName":1776,"installMethods":1777,"rationale":1778,"selectedPaths":1779,"source":338,"sourceLanguage":18,"type":256},"21-research-ideation/creative-thinking-for-research","Applies cognitive science frameworks for creative thinking to CS and AI research ideation. Use when seeking genuinely novel research directions by leveraging combinatorial creativity, analogical reasoning, constraint manipulation, and other empirically grounded creative strategies.","creative-thinking-for-research",{"claudeCode":12},"SKILL.md frontmatter at 21-research-ideation/creative-thinking-for-research/SKILL.md",[1780],{"path":505,"priority":332},{"basePath":1782,"description":1783,"displayName":1784,"installMethods":1785,"rationale":1786,"selectedPaths":1787,"source":338,"sourceLanguage":18,"type":256},"22-agent-native-research-artifact/compiler","Compiles any research input — PDF papers, GitHub repositories, experiment logs, code directories, or raw notes — into a complete Agent-Native Research Artifact (ARA) with cognitive layer (claims, concepts, heuristics), physical layer (configs, code stubs), exploration graph, and grounded evidence. Use when ingesting a paper or codebase into a structured, machine-executable knowledge package, building an ARA from scratch, or converting research outputs into a falsifiable, agent-traversable form.","ara-compiler",{"claudeCode":12},"SKILL.md frontmatter at 22-agent-native-research-artifact/compiler/SKILL.md",[1788,1789,1791,1793],{"path":505,"priority":332},{"path":1790,"priority":508},"references/ara-schema.md",{"path":1792,"priority":508},"references/exploration-tree-spec.md",{"path":1794,"priority":508},"references/validation-checklist.md",{"basePath":1796,"description":1797,"displayName":1798,"installMethods":1799,"rationale":1800,"selectedPaths":1801,"source":338,"sourceLanguage":18,"type":256},"22-agent-native-research-artifact/research-manager","Records research provenance as a post-task epilogue, scanning conversation history at the end of a coding or research session to extract decisions, experiments, dead ends, claims, heuristics, and pivots, and writing them into the ara/ directory with user-vs-AI provenance tags. Use as a session epilogue — never during execution — to maintain a faithful, auditable trace of how a research project actually evolved.","ara-research-manager",{"claudeCode":12},"SKILL.md frontmatter at 22-agent-native-research-artifact/research-manager/SKILL.md",[1802,1803,1805,1807],{"path":505,"priority":332},{"path":1804,"priority":508},"references/event-taxonomy.md",{"path":1806,"priority":508},"references/provenance-tags.md",{"path":1808,"priority":508},"references/session-protocol.md",{"basePath":1810,"description":1811,"displayName":1812,"installMethods":1813,"rationale":1814,"selectedPaths":1815,"source":338,"sourceLanguage":18,"type":256},"22-agent-native-research-artifact/rigor-reviewer","Performs ARA Seal Level 2 semantic epistemic review on Agent-Native Research Artifacts, scoring six dimensions (evidence relevance, falsifiability, scope calibration, argument coherence, exploration integrity, methodological rigor) and producing a constructive, severity-ranked report with a Strong Accept-to-Reject recommendation. Use after Level 1 structural validation passes, when an ARA needs an objective epistemic critique before publication or release.","ara-rigor-reviewer",{"claudeCode":12},"SKILL.md frontmatter at 22-agent-native-research-artifact/rigor-reviewer/SKILL.md",[1816,1817],{"path":505,"priority":332},{"path":1818,"priority":508},"references/review-dimensions.md",{"basePath":1820,"description":1821,"displayName":1822,"installMethods":1823,"license":247,"rationale":1824,"selectedPaths":1825,"source":338,"sourceLanguage":18,"type":1833},"packages/ai-research-skills","Install AI research engineering skills to your coding agents (Claude Code, OpenCode, Cursor, Gemini CLI, Hermes Agent, and more)","@orchestra-research/ai-research-skills",{"npm":1822},"cli ecosystem detected at packages/ai-research-skills",[1826,1828,1829,1831],{"path":1827,"priority":332},"package.json",{"path":334,"priority":332},{"path":1830,"priority":508},"bin/cli.js",{"path":1832,"priority":515},"src/index.js","cli",{"sources":1835},[1836],"manual",{"npmPackage":291},{"closedIssues90d":239,"description":1839,"forks":240,"homepage":1840,"license":247,"openIssues90d":242,"pushedAt":243,"readmeSize":237,"stars":244,"topics":1841},"Comprehensive open-source library of AI research and engineering skills for any AI model. Package the skills and your claude code/codex/gemini agent will be an AI research agent with full horsepower. Maintained by Orchestra Research.","http://orchestra-research.com",[1842,283,1843,1844,1845,1846,1847,1848,1849,1850,1851,1852,1853,1854],"ai","claude","claude-code","claude-skills","codex","gemini","gpt-5","grpo","huggingface","machine-leanring","megatron","skills","vllm",{"downloads":8},{"classifiedAt":1857,"discoverAt":1858,"extractAt":1859,"githubAt":1859,"npmAt":1860,"updatedAt":1857},1778695115942,1778695107142,1778695112108,1778695113836,[221,218,217,215,222,219,223,216,220,13],{"evaluatedAt":251,"extractAt":300,"updatedAt":251},[],[1865,1891,1913,1931,1950,1978],{"_creationTime":1866,"_id":1867,"community":1868,"display":1869,"identity":1874,"providers":1879,"relations":1885,"tags":1887,"workflow":1888},1778685991755.7134,"k172nhjvr51qzvvv0zhcedj10586m02g",{"reviewCount":8},{"description":10,"installMethods":1870,"name":1872,"sourceUrl":1873},{"claudeCode":1871},"davila7/claude-code-templates","TensorRT-LLM Inference Serving","https://github.com/davila7/claude-code-templates",{"basePath":1875,"githubOwner":1876,"githubRepo":1877,"locale":18,"slug":1878,"type":256},"cli-tool/components/skills/ai-research/inference-serving-tensorrt-llm","davila7","claude-code-templates","inference-serving-tensorrt-llm",{"evaluate":1880,"extract":1884},{"promptVersionExtension":208,"promptVersionScoring":209,"score":281,"tags":1881,"targetMarket":224,"tier":225},[215,13,216,1882,404,1883],"gpu","llm",{"commitSha":289,"license":247},{"repoId":1886},"kd71fzn4s7r0269fkw47wt670n86ndz0",[1882,215,1883,216,404,13],{"evaluatedAt":1889,"extractAt":1890,"updatedAt":1889},1778687348408,1778685991755,{"_creationTime":1892,"_id":1893,"community":1894,"display":1895,"identity":1898,"providers":1900,"relations":1909,"tags":1910,"workflow":1911},1778695116697.1804,"k17f20kafjvhnkrnwhfcqczs3n86m171",{"reviewCount":8},{"description":762,"installMethods":1896,"name":1897,"sourceUrl":14},{"claudeCode":12},"miles RL Training",{"basePath":761,"githubOwner":254,"githubRepo":255,"locale":18,"slug":1899,"type":256},"miles",{"evaluate":1901,"extract":1908},{"promptVersionExtension":208,"promptVersionScoring":209,"score":1902,"tags":1903,"targetMarket":224,"tier":225},97,[1904,1905,221,222,1906,1140,1907],"reinforcement-learning","moe","enterprise","megatron-lm",{"commitSha":289,"license":247},{"parentExtensionId":259,"repoId":296},[1906,221,222,1907,1905,1904,1140],{"evaluatedAt":1912,"extractAt":300,"updatedAt":1912},1778695991660,{"_creationTime":1914,"_id":1915,"community":1916,"display":1917,"identity":1920,"providers":1921,"relations":1927,"tags":1928,"workflow":1929},1778695116697.1882,"k171v3d0cnswa9b5090933zp1n86mxnp",{"reviewCount":8},{"description":1163,"installMethods":1918,"name":1919,"sourceUrl":14},{"claudeCode":12},"vLLM - High-Performance LLM Serving",{"basePath":1162,"githubOwner":254,"githubRepo":255,"locale":18,"slug":1854,"type":256},{"evaluate":1922,"extract":1926},{"promptVersionExtension":208,"promptVersionScoring":209,"score":1902,"tags":1923,"targetMarket":224,"tier":225},[1854,215,1883,1924,1925,220],"api","quantization",{"commitSha":289,"license":247},{"parentExtensionId":259,"repoId":296},[1924,215,1883,220,1925,1854],{"evaluatedAt":1930,"extractAt":300,"updatedAt":1930},1778696705273,{"_creationTime":1932,"_id":1933,"community":1934,"display":1935,"identity":1937,"providers":1940,"relations":1946,"tags":1947,"workflow":1948},1778685991755.7244,"k17c363g8s57mc26k68rcv085n86mej0",{"reviewCount":8},{"description":762,"installMethods":1936,"name":763,"sourceUrl":1873},{"claudeCode":1871},{"basePath":1938,"githubOwner":1876,"githubRepo":1877,"locale":18,"slug":1939,"type":256},"cli-tool/components/skills/ai-research/post-training-miles","post-training-miles",{"evaluate":1941,"extract":1945},{"promptVersionExtension":208,"promptVersionScoring":209,"score":1942,"tags":1943,"targetMarket":224,"tier":1944},92,[1904,1905,221,222,1906,1140,1907],"community",{"commitSha":289},{"repoId":1886},[1906,221,222,1907,1905,1904,1140],{"evaluatedAt":1949,"extractAt":1890,"updatedAt":1949},1778688397786,{"_creationTime":1951,"_id":1952,"community":1953,"display":1954,"identity":1960,"providers":1963,"relations":1972,"tags":1974,"workflow":1975},1778696113180.8176,"k17cph4pqw00c8x14hpx2s42xd86m0c2",{"reviewCount":8},{"description":1955,"installMethods":1956,"name":1958,"sourceUrl":1959},"Manage active production incidents through detection, triage, mitigation, communication, and resolution with structured roles and decision-making. Use this skill whenever the user has an active incident, a production issue, a service outage, a security incident, or needs to plan incident response procedures. Triggers on incident response, production incident, outage, service down, site down, P0, P1, severity, downtime, on-call, incident commander, status page, postmortem prep. Also triggers when something is actively broken in production and the user is figuring out what to do.",{"claudeCode":1957},"rampstackco/claude-skills","incident-response","https://github.com/rampstackco/claude-skills",{"basePath":1961,"githubOwner":1962,"githubRepo":1845,"locale":18,"slug":1958,"type":256},"skills/incident-response","rampstackco",{"evaluate":1964,"extract":1971},{"promptVersionExtension":208,"promptVersionScoring":209,"score":1965,"tags":1966,"targetMarket":224,"tier":225},100,[1967,1968,1969,220,1970],"incident-management","operations","devops","on-call",{"commitSha":289},{"repoId":1973},"kd7bebccrrd1xf6w868aggftrd86m86v",[1969,1967,1970,1968,220],{"evaluatedAt":1976,"extractAt":1977,"updatedAt":1976},1778697024409,1778696113180,{"_creationTime":1979,"_id":1980,"community":1981,"display":1982,"identity":1988,"providers":1992,"relations":2001,"tags":2004,"workflow":2005},1778685615701.8425,"k1707ctze9p8fn1e339nd2czjn86m6p5",{"reviewCount":8},{"description":1983,"installMethods":1984,"name":1986,"sourceUrl":1987},"When the user wants to create, generate, or produce video content using AI tools or programmatic frameworks. Also use when the user mentions 'video production,' 'AI video,' 'Remotion,' 'Hyperframes,' 'HeyGen,' 'Synthesia,' 'Veo,' 'Runway,' 'Kling,' 'Pika,' 'video generation,' 'AI avatar,' 'talking head video,' 'programmatic video,' 'video template,' 'explainer video,' 'product demo video,' 'video pipeline,' or 'make me a video.' Use this for video creation, generation, and production workflows. For video content strategy and what to post, see social-content. For paid video ad creative, see ad-creative.",{"claudeCode":1985},"coreyhaines31/marketingskills","video","https://github.com/coreyhaines31/marketingskills",{"basePath":1989,"githubOwner":1990,"githubRepo":1991,"locale":18,"slug":1986,"type":256},"skills/video","coreyhaines31","marketingskills",{"evaluate":1993,"extract":2000},{"promptVersionExtension":208,"promptVersionScoring":209,"score":1965,"tags":1994,"targetMarket":224,"tier":225},[1986,220,1995,1996,1997,1998,1999],"ai-video","remotion","heygen","runway","content-creation",{"commitSha":289},{"parentExtensionId":2002,"repoId":2003},"k175jvka8cxxkf91gk8qy25r8186npjr","kd7a4vjty5ay3s25r82cm72wdn86nmg0",[1995,1999,1997,220,1996,1998,1986],{"evaluatedAt":2006,"extractAt":2007,"updatedAt":2006},1778686582142,1778685615701]