[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"extension-skill-huggingface-huggingface-llm-trainer-de":3,"guides-for-huggingface-huggingface-llm-trainer":739,"similar-k17aqa68b1vx1r0j9feqm2kggh86nv3v-de":740},{"_creationTime":4,"_id":5,"children":6,"community":7,"display":9,"evaluation":15,"identity":247,"isFallback":244,"parentExtension":252,"providers":286,"relations":290,"repo":291,"tags":737,"workflow":738},1778690773482.4878,"k17aqa68b1vx1r0j9feqm2kggh86nv3v",[],{"reviewCount":8},0,{"description":10,"installMethods":11,"name":13,"sourceUrl":14},"Train or fine-tune language and vision models using TRL (Transformer Reinforcement Learning) or Unsloth with Hugging Face Jobs infrastructure. Covers SFT, DPO, GRPO and reward modeling training methods, plus GGUF conversion for local deployment. Includes guidance on the TRL Jobs package, UV scripts with PEP 723 format, dataset preparation and validation, hardware selection, cost estimation, Trackio monitoring, Hub authentication, model selection/leaderboards and model persistence. 
Use for tasks involving cloud GPU training, GGUF conversion, or when users mention training on Hugging Face Jobs without local GPU setup.",{"claudeCode":12},"huggingface/skills","huggingface-llm-trainer","https://github.com/huggingface/skills",{"_creationTime":16,"_id":17,"extensionId":5,"locale":18,"result":19,"trustSignals":228,"workflow":245},1778691314030.4255,"kn7d48m44qqfe3e4dvvcayj92n86ny9h","en",{"checks":20,"evaluatedAt":194,"extensionSummary":195,"features":196,"nonGoals":202,"promptVersionExtension":206,"promptVersionScoring":207,"purpose":208,"rationale":209,"score":210,"summary":211,"tags":212,"targetMarket":221,"tier":222,"useCases":223},[21,26,29,32,36,39,44,48,51,54,58,62,65,69,72,75,78,81,84,87,90,94,98,102,106,109,112,115,119,122,125,128,131,134,137,141,145,149,152,156,159,162,165,168,172,175,178,181,184,187,191],{"category":22,"check":23,"severity":24,"summary":25},"Practical Utility","Problem relevance","pass","The description clearly identifies the problem of training language and vision models using TRL or Unsloth on Hugging Face Jobs, and specifies the target user and scenarios.",{"category":22,"check":27,"severity":24,"summary":28},"Unique selling proposition","The extension offers significant value over basic prompting by abstracting the complexities of Hugging Face Jobs, TRL/Unsloth configurations, and GGUF conversion, providing a streamlined workflow.",{"category":22,"check":30,"severity":24,"summary":31},"Production readiness","The skill covers the complete lifecycle for training models on Hugging Face Jobs, including setup, execution, monitoring, cost estimation, and model conversion for deployment.",{"category":33,"check":34,"severity":24,"summary":35},"Scope","Single responsibility principle","The skill focuses on fine-tuning language and vision models using TRL and Unsloth on Hugging Face Jobs, encompassing related tasks like GGUF conversion and monitoring, all within a coherent 
domain.",{"category":33,"check":37,"severity":24,"summary":38},"Description quality","The displayed description is concise, readable, and accurately reflects the skill's capabilities, including training methods, infrastructure, and related utilities.",{"category":40,"check":41,"severity":42,"summary":43},"Invocation","Scoped tools","not_applicable","This is a skill-based extension, not a tool-based one, so the concept of scoped tools does not apply.",{"category":45,"check":46,"severity":24,"summary":47},"Documentation","Configuration & parameter reference","The SKILL.md file extensively documents all parameters, configurations, hardware selection, and best practices for using TRL, Unsloth, and Hugging Face Jobs.",{"category":33,"check":49,"severity":42,"summary":50},"Tool naming","This is a skill-based extension, not a tool-based one, so tool naming conventions are not applicable.",{"category":33,"check":52,"severity":42,"summary":53},"Minimal I/O surface","As a skill, it doesn't expose explicit tools with parameter schemas; its interface is managed through the agent's interaction with the SKILL.md instructions.",{"category":55,"check":56,"severity":24,"summary":57},"License","License usability","The extension is licensed under the Apache-2.0 license, which is a permissive open-source license.",{"category":59,"check":60,"severity":24,"summary":61},"Maintenance","Commit recency","The repository shows recent commits within the last 90 days, indicating active maintenance.",{"category":59,"check":63,"severity":24,"summary":64},"Dependency Management","The provided scripts include PEP 723 headers specifying dependencies with version pins, ensuring reproducible builds.",{"category":66,"check":67,"severity":24,"summary":68},"Security","Secret Management","The skill correctly handles secrets like HF_TOKEN by expecting them through job submission secrets or environment variables, and does not echo them to 
stdout.",{"category":66,"check":70,"severity":24,"summary":71},"Injection","The skill uses well-defined inputs via `hf_jobs` arguments and script parameters, and does not appear to load or execute untrusted third-party code directly without sandboxing.",{"category":66,"check":73,"severity":24,"summary":74},"Transitive Supply-Chain Grenades","The skill relies on Hugging Face provided infrastructure and well-defined scripts from the TRL library, avoiding runtime downloads or piping to shell.",{"category":66,"check":76,"severity":24,"summary":77},"Sandbox Isolation","Jobs run in isolated Hugging Face environments, and the skill's scripts manage their own temporary directories, preventing filesystem interference outside the job's scope.",{"category":66,"check":79,"severity":24,"summary":80},"Sandbox escape primitives","No evidence of detached processes or retry loops around denied tool calls is present in the provided script examples.",{"category":66,"check":82,"severity":24,"summary":83},"Data Exfiltration","The skill focuses on training models and does not include instructions for reading or submitting confidential data to third parties.",{"category":66,"check":85,"severity":24,"summary":86},"Hidden Text Tricks","The bundled Markdown files appear to be free of hidden steering tricks or obfuscated content.",{"category":66,"check":88,"severity":24,"summary":89},"Opaque code execution","Scripts are provided as plain Python code and dependencies are managed via PEP 723, avoiding obfuscation or runtime code fetching.",{"category":91,"check":92,"severity":24,"summary":93},"Portability","Structural Assumption","The skill's scripts are designed to run within the Hugging Face Jobs environment and do not make assumptions about user-specific project structures.",{"category":95,"check":96,"severity":24,"summary":97},"Trust","Issues Attention","The repository has a healthy ratio of closed to open issues, indicating active maintenance and 
responsiveness.",{"category":99,"check":100,"severity":24,"summary":101},"Versioning","Release Management","The repository utilizes Git commits and the LICENSE file indicates a formal release, though a formal versioning scheme (e.g., semver in SKILL.md) is not explicitly used for the skill itself.",{"category":103,"check":104,"severity":24,"summary":105},"Code Execution","Validation","Input arguments for scripts are handled via argparse, and Hugging Face Jobs environment variables provide secrets and configuration safely.",{"category":66,"check":107,"severity":24,"summary":108},"Unguarded Destructive Operations","The primary operations involve model training and conversion, which are contained within the isolated job environment and do not involve destructive file operations on the user's system.",{"category":103,"check":110,"severity":24,"summary":111},"Error Handling","The example scripts incorporate try-except blocks, clear error messages, and utilize the `check=True` argument in subprocess calls for robust error handling.",{"category":103,"check":113,"severity":24,"summary":114},"Logging","The example scripts include print statements and use logging for progress updates, with Trackio for detailed monitoring, and Hugging Face Jobs captures all stdout/stderr.",{"category":116,"check":117,"severity":42,"summary":118},"Compliance","GDPR","The skill operates on model training configurations and does not directly handle personal data that would require GDPR scrutiny.",{"category":116,"check":120,"severity":24,"summary":121},"Target market","The skill is globally applicable for training language models and does not have any regional or jurisdictional limitations.",{"category":91,"check":123,"severity":24,"summary":124},"Runtime stability","The scripts are designed for the Hugging Face Jobs environment and use standard Python libraries and PEP 723 for dependencies, ensuring broad portability.",{"category":45,"check":126,"severity":24,"summary":127},"README","The README 
file is comprehensive, explaining the skill's purpose, usage, installation, and available commands.",{"category":33,"check":129,"severity":42,"summary":130},"Tool surface size","This is a skill, not a collection of tools, and its functionality is accessed through the SKILL.md instructions rather than a discrete list of tools.",{"category":40,"check":132,"severity":42,"summary":133},"Overlapping near-synonym tools","As a skill, it doesn't expose individual tools with potentially overlapping names; its functionality is invoked as a single unit.",{"category":45,"check":135,"severity":24,"summary":136},"Phantom features","All advertised features, including training methods, GGUF conversion, and monitoring, are backed by concrete scripts and instructions within the SKILL.md and associated reference files.",{"category":138,"check":139,"severity":24,"summary":140},"Install","Installation instruction","The README provides clear installation instructions for various agents (Claude Code, Codex, Gemini CLI, Cursor) and includes copy-paste examples.",{"category":142,"check":143,"severity":24,"summary":144},"Errors","Actionable error messages","The example scripts demonstrate providing actionable error messages and hints for common failure modes, such as missing dependencies or incorrect configurations.",{"category":146,"check":147,"severity":24,"summary":148},"Execution","Pinned dependencies","The example scripts utilize PEP 723 headers to declare dependencies with version pins, ensuring reproducible execution environments.",{"category":33,"check":150,"severity":42,"summary":151},"Dry-run preview","Model training and conversion are core functionalities and do not have a direct 'dry-run' equivalent; however, the script includes cost and time estimation.",{"category":153,"check":154,"severity":24,"summary":155},"Protocol","Idempotent retry & timeouts","The `hf_jobs` submission includes a timeout parameter, and the scripts are designed to be self-contained units that do not rely 
on state between separate job runs.",{"category":116,"check":157,"severity":24,"summary":158},"Telemetry opt-in","Telemetry is handled via Trackio, which can be configured with a specific Space ID or defaults to local tracking, offering opt-in control.",{"category":40,"check":160,"severity":24,"summary":161},"Precise Purpose","The description clearly defines the skill's purpose (training LLMs with TRL/Unsloth on HF Jobs) and provides specific use cases and when-to-use scenarios.",{"category":40,"check":163,"severity":24,"summary":164},"Concise Frontmatter","The SKILL.md frontmatter is concise, self-contained, and effectively summarizes the skill's core capabilities and usage scenarios.",{"category":45,"check":166,"severity":24,"summary":167},"Concise Body","The SKILL.md body is well-structured, providing clear guidance and examples without excessive verbosity, and delegates deeper material to reference files.",{"category":169,"check":170,"severity":24,"summary":171},"Context","Progressive Disclosure","The SKILL.md outlines the workflow and links to detailed reference files for specific topics like training methods, hardware, GGUF conversion, and troubleshooting.",{"category":169,"check":173,"severity":42,"summary":174},"Forked exploration","The skill is focused on execution of defined training tasks, not broad exploration or deep code review, so `context: fork` is not applicable.",{"category":22,"check":176,"severity":24,"summary":177},"Usage examples","The SKILL.md and example scripts provide numerous end-to-end, ready-to-use examples for SFT, DPO, GRPO, and GGUF conversion, covering various scenarios.",{"category":22,"check":179,"severity":24,"summary":180},"Edge cases","The troubleshooting guide addresses common failure modes like OOM errors, dataset format issues, timeouts, and authentication problems with clear recovery steps.",{"category":103,"check":182,"severity":42,"summary":183},"Tool Fallback","The skill relies on Hugging Face Jobs infrastructure and 
standard Python libraries, not external tools with fallback requirements.",{"category":91,"check":185,"severity":24,"summary":186},"Stack assumptions","The skill clearly states its environment requirements (Python version, system build tools) and dependency management via PEP 723 headers.",{"category":188,"check":189,"severity":24,"summary":190},"Safety","Halt on unexpected state","The example scripts incorporate error handling and validation checks before critical operations, and the Hugging Face Jobs environment is isolated.",{"category":91,"check":192,"severity":24,"summary":193},"Cross-skill coupling","The skill is self-contained and does not implicitly rely on other skills; references to external documentation or helper scripts are explicit.",1778691313346,"This skill enables fine-tuning language and vision models using TRL or Unsloth on Hugging Face Jobs, covering SFT, DPO, GRPO methods, GGUF conversion, hardware selection, cost estimation, and monitoring.",[197,198,199,200,201],"Fine-tune LLMs using TRL or Unsloth","Leverage Hugging Face Jobs infrastructure","Supports SFT, DPO, GRPO, and Reward Modeling","Convert models to GGUF format for local deployment","Includes cost estimation and Trackio monitoring",[203,204,205],"Directly managing Hugging Face infrastructure (handled by `hf-cli`)","Advanced distributed training setup beyond TRL's automatic handling","Modifying the core TRL or Unsloth libraries","3.0.0","4.4.0","Streamline and simplify the process of training and converting LLMs on cloud infrastructure, making advanced ML workflows accessible.","The skill is exceptionally well-documented, covers a complex workflow (LLM training on cloud infrastructure) comprehensively, and provides production-ready examples with excellent error handling and best practices. 
No critical or warning findings were identified.",99,"Highly comprehensive and production-ready skill for training LLMs on Hugging Face Jobs.",[213,214,215,216,217,218,219,220],"llm","fine-tuning","trl","unsloth","huggingface-jobs","gguf","python","machine-learning","global","verified",[224,225,226,227],"Fine-tune language models on cloud GPUs without local setup","Align models with human preferences using DPO","Convert trained models to GGUF for Ollama or LM Studio","Optimize training for limited GPU memory with Unsloth",{"codeQuality":229,"collectedAt":231,"documentation":232,"maintenance":235,"security":241,"testCoverage":243},{"hasLockfile":230},false,1778691304173,{"descriptionLength":233,"readmeSize":234},623,9821,{"closedIssues90d":236,"forks":237,"hasChangelog":230,"openIssues90d":238,"pushedAt":239,"stars":240},6,663,4,1778593131000,10482,{"hasNpmPackage":230,"license":242,"smitheryVerified":230},"Apache-2.0",{"hasCi":244,"hasTests":230},true,{"updatedAt":246},1778691314030,{"basePath":248,"githubOwner":249,"githubRepo":250,"locale":18,"slug":13,"type":251},"skills/huggingface-llm-trainer","huggingface","skills","skill",{"_creationTime":253,"_id":254,"community":255,"display":256,"identity":261,"parentExtension":264,"providers":265,"relations":280,"tags":282,"workflow":283},1778690773482.486,"k175g1spb5757qt4tnj9cktcn986mshy",{"reviewCount":8},{"description":257,"installMethods":258,"name":260,"sourceUrl":14},"Agent Skills for AI/ML tasks including dataset creation, model training, evaluation, and research paper publishing on Hugging Face Hub",{"claudeCode":259},"huggingface-skills","Hugging Face 
Skills",{"basePath":262,"githubOwner":249,"githubRepo":250,"locale":18,"slug":250,"type":263},"","plugin",null,{"evaluate":266,"extract":275},{"promptVersionExtension":206,"promptVersionScoring":207,"score":267,"tags":268,"targetMarket":221,"tier":222},98,[249,269,270,271,272,273,274,219],"ai","ml","datasets","models","training","cli",{"commitSha":276,"license":242,"plugin":277},"HEAD",{"mcpCount":8,"provider":278,"skillCount":279},"classify",14,{"repoId":281},"kd72xwt5xnc0ktc4p7smzfcp3986m959",[269,274,271,249,270,272,219,273],{"evaluatedAt":284,"extractAt":285,"updatedAt":284},1778691185872,1778690773482,{"evaluate":287,"extract":289},{"promptVersionExtension":206,"promptVersionScoring":207,"score":210,"tags":288,"targetMarket":221,"tier":222},[213,214,215,216,217,218,219,220],{"commitSha":276},{"parentExtensionId":254,"repoId":281},{"_creationTime":292,"_id":281,"identity":293,"providers":294,"workflow":733},1778689536128.5474,{"githubOwner":249,"githubRepo":250,"sourceUrl":14},{"classify":295,"discover":726,"github":729},{"commitSha":276,"extensions":296},[297,311,318,326,334,342,350,358,366,374,382,390,398,406,414,422,465,473,479,485,502,508,515,556,567,586,592,612,624,648,706],{"basePath":262,"description":257,"displayName":259,"installMethods":298,"rationale":299,"selectedPaths":300,"source":309,"sourceLanguage":18,"type":310},{"claudeCode":12},"marketplace.json at .claude-plugin/marketplace.json",[301,304,306],{"path":302,"priority":303},".claude-plugin/marketplace.json","mandatory",{"path":305,"priority":303},"README.md",{"path":307,"priority":308},"LICENSE","high","rule","marketplace",{"basePath":248,"description":312,"displayName":13,"installMethods":313,"rationale":314,"selectedPaths":315,"source":309,"sourceLanguage":18,"type":263},"Train or fine-tune language models using TRL on Hugging Face Jobs infrastructure. Covers SFT, DPO, GRPO and reward modeling training methods, plus GGUF conversion for local deployment. 
Includes hardware selection, cost estimation, Trackio monitoring, and Hub persistence.",{"claudeCode":13},"inline plugin source from marketplace.json at skills/huggingface-llm-trainer",[316],{"path":317,"priority":308},"SKILL.md",{"basePath":319,"description":320,"displayName":321,"installMethods":322,"rationale":323,"selectedPaths":324,"source":309,"sourceLanguage":18,"type":263},"skills/huggingface-local-models","Use to select models to run locally with llama.cpp and GGUF on CPU, Mac Metal, CUDA, or ROCm. Covers finding GGUFs, quant selection, running servers, exact GGUF file lookup, conversion, and OpenAI-compatible local serving.","huggingface-local-models",{"claudeCode":321},"inline plugin source from marketplace.json at skills/huggingface-local-models",[325],{"path":317,"priority":308},{"basePath":327,"description":328,"displayName":329,"installMethods":330,"rationale":331,"selectedPaths":332,"source":309,"sourceLanguage":18,"type":263},"skills/huggingface-paper-publisher","Publish and manage research papers on Hugging Face Hub. 
Supports creating paper pages, linking papers to models/datasets, claiming authorship, and generating professional markdown-based research articles.","huggingface-paper-publisher",{"claudeCode":329},"inline plugin source from marketplace.json at skills/huggingface-paper-publisher",[333],{"path":317,"priority":308},{"basePath":335,"description":336,"displayName":337,"installMethods":338,"rationale":339,"selectedPaths":340,"source":309,"sourceLanguage":18,"type":263},"skills/huggingface-papers","Look up and read Hugging Face paper pages in markdown, and use the papers API for structured metadata like authors, linked models, datasets, Spaces, and media URLs when needed.","huggingface-papers",{"claudeCode":337},"inline plugin source from marketplace.json at skills/huggingface-papers",[341],{"path":317,"priority":308},{"basePath":343,"description":344,"displayName":345,"installMethods":346,"rationale":347,"selectedPaths":348,"source":309,"sourceLanguage":18,"type":263},"skills/huggingface-community-evals","Add and manage evaluation results in Hugging Face model cards. Supports extracting eval tables from README content, importing scores from Artificial Analysis API, and running custom evaluations with vLLM/lighteval.","huggingface-community-evals",{"claudeCode":345},"inline plugin source from marketplace.json at skills/huggingface-community-evals",[349],{"path":317,"priority":308},{"basePath":351,"description":352,"displayName":353,"installMethods":354,"rationale":355,"selectedPaths":356,"source":309,"sourceLanguage":18,"type":263},"skills/huggingface-best","Find the best AI model for any task by querying Hugging Face leaderboards and benchmarks. 
Recommends top models based on task type, hardware constraints, and benchmark scores.","huggingface-best",{"claudeCode":353},"inline plugin source from marketplace.json at skills/huggingface-best",[357],{"path":317,"priority":308},{"basePath":359,"description":360,"displayName":361,"installMethods":362,"rationale":363,"selectedPaths":364,"source":309,"sourceLanguage":18,"type":263},"skills/hf-cli","Execute Hugging Face Hub operations using the hf CLI. Download models/datasets, upload files, manage repos, and run cloud compute jobs.","hf-cli",{"claudeCode":361},"inline plugin source from marketplace.json at skills/hf-cli",[365],{"path":317,"priority":308},{"basePath":367,"description":368,"displayName":369,"installMethods":370,"rationale":371,"selectedPaths":372,"source":309,"sourceLanguage":18,"type":263},"skills/huggingface-trackio","Track and visualize ML training experiments with Trackio. Log metrics via Python API and retrieve them via CLI. Supports real-time dashboards synced to HF Spaces.","huggingface-trackio",{"claudeCode":369},"inline plugin source from marketplace.json at skills/huggingface-trackio",[373],{"path":317,"priority":308},{"basePath":375,"description":376,"displayName":377,"installMethods":378,"rationale":379,"selectedPaths":380,"source":309,"sourceLanguage":18,"type":263},"skills/huggingface-datasets","Explore, query, and extract data from any Hugging Face dataset using the Dataset Viewer REST API and npx tooling. 
Zero Python dependencies — covers split/config discovery, row pagination, text search, filtering, SQL via parquetlens, and dataset upload via CLI.","huggingface-datasets",{"claudeCode":377},"inline plugin source from marketplace.json at skills/huggingface-datasets",[381],{"path":317,"priority":308},{"basePath":383,"description":384,"displayName":385,"installMethods":386,"rationale":387,"selectedPaths":388,"source":309,"sourceLanguage":18,"type":263},"skills/huggingface-tool-builder","Build reusable scripts for Hugging Face Hub and API workflows. Useful for chaining API calls, enriching Hub metadata, or automating repeated tasks.","huggingface-tool-builder",{"claudeCode":385},"inline plugin source from marketplace.json at skills/huggingface-tool-builder",[389],{"path":317,"priority":308},{"basePath":391,"description":392,"displayName":393,"installMethods":394,"rationale":395,"selectedPaths":396,"source":309,"sourceLanguage":18,"type":263},"skills/huggingface-gradio","Build Gradio web UIs and demos in Python. Use when creating or editing Gradio apps, components, event listeners, layouts, or chatbots.","huggingface-gradio",{"claudeCode":393},"inline plugin source from marketplace.json at skills/huggingface-gradio",[397],{"path":317,"priority":308},{"basePath":399,"description":400,"displayName":401,"installMethods":402,"rationale":403,"selectedPaths":404,"source":309,"sourceLanguage":18,"type":263},"skills/transformers-js","Run state-of-the-art machine learning models directly in JavaScript/TypeScript for NLP, computer vision, audio processing, and multimodal tasks. 
Works in Node.js and browsers with WebGPU/WASM using Hugging Face models.","transformers-js",{"claudeCode":401},"inline plugin source from marketplace.json at skills/transformers-js",[405],{"path":317,"priority":308},{"basePath":407,"description":408,"displayName":409,"installMethods":410,"rationale":411,"selectedPaths":412,"source":309,"sourceLanguage":18,"type":263},"skills/huggingface-vision-trainer","Train and fine-tune object detection models (RTDETRv2, YOLOS, DETR and others) and image classification models (timm and transformers models — MobileNetV3, MobileViT, ResNet, ViT/DINOv3) using Transformers Trainer API on Hugging Face Jobs infrastructure or locally. Includes COCO dataset format support, Albumentations augmentation, mAP/mAR metrics, trackio tracking, hardware selection, and Hub persistence.","huggingface-vision-trainer",{"claudeCode":409},"inline plugin source from marketplace.json at skills/huggingface-vision-trainer",[413],{"path":317,"priority":308},{"basePath":415,"description":416,"displayName":417,"installMethods":418,"rationale":419,"selectedPaths":420,"source":309,"sourceLanguage":18,"type":263},"skills/train-sentence-transformers","Train or fine-tune sentence-transformers models across all three architectures: SentenceTransformer (bi-encoder embeddings), CrossEncoder (rerankers), and SparseEncoder (SPLADE). 
Covers loss selection, hard-negative mining, evaluators, distillation, LoRA, Matryoshka, and Hugging Face Hub publishing.","train-sentence-transformers",{"claudeCode":417},"inline plugin source from marketplace.json at skills/train-sentence-transformers",[421],{"path":317,"priority":308},{"basePath":262,"description":257,"displayName":259,"installMethods":423,"license":242,"rationale":424,"selectedPaths":425,"source":309,"sourceLanguage":18,"type":263},{"claudeCode":259},"plugin manifest at .claude-plugin/plugin.json",[426,428,429,430,433,435,437,439,441,443,445,447,449,451,453,455,457,459,461,463],{"path":427,"priority":303},".claude-plugin/plugin.json",{"path":305,"priority":303},{"path":307,"priority":308},{"path":431,"priority":432},"skills/hf-cli/SKILL.md","medium",{"path":434,"priority":432},"skills/huggingface-best/SKILL.md",{"path":436,"priority":432},"skills/huggingface-community-evals/SKILL.md",{"path":438,"priority":432},"skills/huggingface-datasets/SKILL.md",{"path":440,"priority":432},"skills/huggingface-gradio/SKILL.md",{"path":442,"priority":432},"skills/huggingface-llm-trainer/SKILL.md",{"path":444,"priority":432},"skills/huggingface-local-models/SKILL.md",{"path":446,"priority":432},"skills/huggingface-paper-publisher/SKILL.md",{"path":448,"priority":432},"skills/huggingface-papers/SKILL.md",{"path":450,"priority":432},"skills/huggingface-tool-builder/SKILL.md",{"path":452,"priority":432},"skills/huggingface-trackio/SKILL.md",{"path":454,"priority":432},"skills/huggingface-vision-trainer/SKILL.md",{"path":456,"priority":432},"skills/train-sentence-transformers/SKILL.md",{"path":458,"priority":432},"skills/transformers-js/SKILL.md",{"path":460,"priority":303},".mcp.json",{"path":462,"priority":308},"agents/AGENTS.md",{"path":464,"priority":308},".cursor-plugin/plugin.json",{"basePath":466,"description":467,"displayName":468,"installMethods":469,"rationale":470,"selectedPaths":471,"source":309,"sourceLanguage":18,"type":251},"hf-mcp/skills/hf-mcp","Us
e Hugging Face Hub via MCP server tools. Search models, datasets, Spaces, papers. Get repo details, fetch documentation, run compute jobs, and use Gradio Spaces as AI tools. Available when connected to the HF MCP server.","hf-mcp",{"claudeCode":12},"SKILL.md frontmatter at hf-mcp/skills/hf-mcp/SKILL.md",[472],{"path":317,"priority":303},{"basePath":359,"description":474,"displayName":361,"installMethods":475,"rationale":476,"selectedPaths":477,"source":309,"sourceLanguage":18,"type":251},"Hugging Face Hub CLI (`hf`) for downloading, uploading, and managing models, datasets, spaces, buckets, repos, papers, jobs, and more on the Hugging Face Hub. Use when: handling authentication; managing local cache; managing Hugging Face Buckets; running or scheduling jobs on Hugging Face infrastructure; managing Hugging Face repos; discussions and pull requests; browsing models, datasets and spaces; reading, searching, or browsing academic papers; managing collections; querying datasets; configuring spaces; setting up webhooks; or deploying and managing HF Inference Endpoints. Make sure to use this skill whenever the user mentions 'hf', 'huggingface', 'Hugging Face', 'huggingface-cli', or 'hugging face cli', or wants to do anything related to the Hugging Face ecosystem and to AI and ML in general. Also use for cloud storage needs like training checkpoints, data pipelines, or agent traces. Use even if the user doesn't explicitly ask for a CLI command. Replaces the deprecated `huggingface-cli`.",{"claudeCode":12},"SKILL.md frontmatter at skills/hf-cli/SKILL.md",[478],{"path":317,"priority":303},{"basePath":351,"description":480,"displayName":353,"installMethods":481,"rationale":482,"selectedPaths":483,"source":309,"sourceLanguage":18,"type":251},"Use when the user asks about finding the best, top, or recommended model for a task, wants to know what AI model to use, or wants to compare models by benchmark scores. 
Triggers on: \"best model for X\", \"what model should I use for\", \"top models for [task]\", \"which model runs on my laptop/machine/device\", \"recommend a model for\", \"what LLM should I use for\", \"compare models for\", \"what's state of the art for\", or any question about choosing an AI model for a specific use case. Always use this skill when the user wants model recommendations or comparisons, even if they don't explicitly mention HuggingFace or benchmarks.\n",{"claudeCode":12},"SKILL.md frontmatter at skills/huggingface-best/SKILL.md",[484],{"path":317,"priority":303},{"basePath":343,"description":486,"displayName":345,"installMethods":487,"rationale":488,"selectedPaths":489,"source":309,"sourceLanguage":18,"type":251},"Run evaluations for Hugging Face Hub models using inspect-ai and lighteval on local hardware. Use for backend selection, local GPU evals, and choosing between vLLM / Transformers / accelerate. Not for HF Jobs orchestration, model-card PRs, .eval_results publication, or community-evals automation.",{"claudeCode":12},"SKILL.md frontmatter at skills/huggingface-community-evals/SKILL.md",[490,491,494,496,498,500],{"path":317,"priority":303},{"path":492,"priority":493},"examples/.env.example","low",{"path":495,"priority":493},"examples/USAGE_EXAMPLES.md",{"path":497,"priority":493},"scripts/inspect_eval_uv.py",{"path":499,"priority":493},"scripts/inspect_vllm_uv.py",{"path":501,"priority":493},"scripts/lighteval_vllm_uv.py",{"basePath":375,"description":503,"displayName":377,"installMethods":504,"rationale":505,"selectedPaths":506,"source":309,"sourceLanguage":18,"type":251},"Use this skill for Hugging Face Dataset Viewer API workflows that fetch subset/split metadata, paginate rows, search text, apply filters, download parquet URLs, and read size or statistics.\r",{"claudeCode":12},"SKILL.md frontmatter at 
skills/huggingface-datasets/SKILL.md",[507],{"path":317,"priority":303},{"basePath":391,"description":392,"displayName":393,"installMethods":509,"rationale":510,"selectedPaths":511,"source":309,"sourceLanguage":18,"type":251},{"claudeCode":12},"SKILL.md frontmatter at skills/huggingface-gradio/SKILL.md",[512,513],{"path":317,"priority":303},{"path":514,"priority":432},"examples.md",{"basePath":248,"description":10,"displayName":13,"installMethods":516,"rationale":517,"selectedPaths":518,"source":309,"sourceLanguage":18,"type":251},{"claudeCode":12},"SKILL.md frontmatter at skills/huggingface-llm-trainer/SKILL.md",[519,520,522,524,526,528,530,532,534,536,538,540,542,544,546,548,550,552,554],{"path":317,"priority":303},{"path":521,"priority":432},"references/gguf_conversion.md",{"path":523,"priority":432},"references/hardware_guide.md",{"path":525,"priority":432},"references/hub_saving.md",{"path":527,"priority":432},"references/local_training_macos.md",{"path":529,"priority":432},"references/reliability_principles.md",{"path":531,"priority":432},"references/trackio_guide.md",{"path":533,"priority":432},"references/training_methods.md",{"path":535,"priority":432},"references/training_patterns.md",{"path":537,"priority":432},"references/troubleshooting.md",{"path":539,"priority":432},"references/unsloth.md",{"path":541,"priority":493},"scripts/convert_to_gguf.py",{"path":543,"priority":493},"scripts/dataset_inspector.py",{"path":545,"priority":493},"scripts/estimate_cost.py",{"path":547,"priority":493},"scripts/hf_benchmarks.py",{"path":549,"priority":493},"scripts/train_dpo_example.py",{"path":551,"priority":493},"scripts/train_grpo_example.py",{"path":553,"priority":493},"scripts/train_sft_example.py",{"path":555,"priority":493},"scripts/unsloth_sft_example.py",{"basePath":319,"description":320,"displayName":321,"installMethods":557,"rationale":558,"selectedPaths":559,"source":309,"sourceLanguage":18,"type":251},{"claudeCode":12},"SKILL.md frontmatter at 
skills/huggingface-local-models/SKILL.md",[560,561,563,565],{"path":317,"priority":303},{"path":562,"priority":432},"references/hardware.md",{"path":564,"priority":432},"references/hub-discovery.md",{"path":566,"priority":432},"references/quantization.md",{"basePath":327,"description":328,"displayName":329,"installMethods":568,"rationale":569,"selectedPaths":570,"source":309,"sourceLanguage":18,"type":251},{"claudeCode":12},"SKILL.md frontmatter at skills/huggingface-paper-publisher/SKILL.md",[571,572,574,576,578,580,582,584],{"path":317,"priority":303},{"path":573,"priority":493},"examples/example_usage.md",{"path":575,"priority":432},"references/quick_reference.md",{"path":577,"priority":493},"scripts/paper_manager.py",{"path":579,"priority":493},"templates/arxiv.md",{"path":581,"priority":493},"templates/ml-report.md",{"path":583,"priority":493},"templates/modern.md",{"path":585,"priority":493},"templates/standard.md",{"basePath":335,"description":587,"displayName":337,"installMethods":588,"rationale":589,"selectedPaths":590,"source":309,"sourceLanguage":18,"type":251},"Look up and read Hugging Face paper pages in markdown, and use the papers API for structured metadata such as authors, linked models/datasets/spaces, GitHub repo and project page. Use when the user shares a Hugging Face paper page URL, an arXiv URL or ID, or asks to summarize, explain, or analyze an AI research paper.",{"claudeCode":12},"SKILL.md frontmatter at skills/huggingface-papers/SKILL.md",[591],{"path":317,"priority":303},{"basePath":383,"description":593,"displayName":385,"installMethods":594,"rationale":595,"selectedPaths":596,"source":309,"sourceLanguage":18,"type":251},"Use this skill when the user wants to build tools/scripts or achieve a task where using data from the Hugging Face API would help. This is especially useful when chaining or combining API calls, or when the task will be repeated/automated. 
This Skill creates a reusable script to fetch, enrich or process data.",{"claudeCode":12},"SKILL.md frontmatter at skills/huggingface-tool-builder/SKILL.md",[597,598,600,602,604,606,608,610],{"path":317,"priority":303},{"path":599,"priority":432},"references/baseline_hf_api.py",{"path":601,"priority":432},"references/baseline_hf_api.sh",{"path":603,"priority":432},"references/baseline_hf_api.tsx",{"path":605,"priority":432},"references/find_models_by_paper.sh",{"path":607,"priority":432},"references/hf_enrich_models.sh",{"path":609,"priority":432},"references/hf_model_card_frontmatter.sh",{"path":611,"priority":432},"references/hf_model_papers_auth.sh",{"basePath":367,"description":613,"displayName":369,"installMethods":614,"rationale":615,"selectedPaths":616,"source":309,"sourceLanguage":18,"type":251},"Track and visualize ML training experiments with Trackio. Use when logging metrics during training (Python API), firing alerts for training diagnostics, or retrieving/analyzing logged metrics (CLI). Supports real-time dashboard visualization, alerts with webhooks, HF Space syncing, and JSON output for automation.",{"claudeCode":12},"SKILL.md frontmatter at skills/huggingface-trackio/SKILL.md",[617,618,620,622],{"path":317,"priority":303},{"path":619,"priority":432},"references/alerts.md",{"path":621,"priority":432},"references/logging_metrics.md",{"path":623,"priority":432},"references/retrieving_metrics.md",{"basePath":407,"description":625,"displayName":409,"installMethods":626,"rationale":627,"selectedPaths":628,"source":309,"sourceLanguage":18,"type":251},"Trains and fine-tunes vision models for object detection (D-FINE, RT-DETR v2, DETR, YOLOS), image classification (timm models — MobileNetV3, MobileViT, ResNet, ViT/DINOv3 — plus any Transformers classifier), and SAM/SAM2 segmentation using Hugging Face Transformers on Hugging Face Jobs cloud GPUs. 
Covers COCO-format dataset preparation, Albumentations augmentation, mAP/mAR evaluation, accuracy metrics, SAM segmentation with bbox/point prompts, DiceCE loss, hardware selection, cost estimation, Trackio monitoring, and Hub persistence. Use when users mention training object detection, image classification, SAM, SAM2, segmentation, image matting, DETR, D-FINE, RT-DETR, ViT, timm, MobileNet, ResNet, bounding box models, or fine-tuning vision models on Hugging Face Jobs.",{"claudeCode":12},"SKILL.md frontmatter at skills/huggingface-vision-trainer/SKILL.md",[629,630,632,633,635,637,638,640,641,642,644,646],{"path":317,"priority":303},{"path":631,"priority":432},"references/finetune_sam2_trainer.md",{"path":525,"priority":432},{"path":634,"priority":432},"references/image_classification_training_notebook.md",{"path":636,"priority":432},"references/object_detection_training_notebook.md",{"path":529,"priority":432},{"path":639,"priority":432},"references/timm_trainer.md",{"path":543,"priority":493},{"path":545,"priority":493},{"path":643,"priority":493},"scripts/image_classification_training.py",{"path":645,"priority":493},"scripts/object_detection_training.py",{"path":647,"priority":493},"scripts/sam_segmentation_training.py",{"basePath":415,"description":649,"displayName":417,"installMethods":650,"rationale":651,"selectedPaths":652,"source":309,"sourceLanguage":18,"type":251},"Train or fine-tune sentence-transformers models across `SentenceTransformer` (bi-encoder; dense or static embedding model; for retrieval, similarity, clustering, classification, paraphrase mining, dedup, multimodal), `CrossEncoder` (reranker; pair scoring for two-stage retrieval / pair classification), and `SparseEncoder` (SPLADE, sparse embedding model; for learned-sparse retrieval). Covers loss selection, hard-negative mining, evaluators, distillation, LoRA, Matryoshka, and Hugging Face Hub publishing. 
Use for any sentence-transformers training task.",{"claudeCode":12},"SKILL.md frontmatter at skills/train-sentence-transformers/SKILL.md",[653,654,656,658,660,662,664,665,667,669,671,673,675,677,679,680,682,684,686,688,690,692,694,696,698,700,702,704],{"path":317,"priority":303},{"path":655,"priority":432},"references/base_model_selection.md",{"path":657,"priority":432},"references/dataset_formats.md",{"path":659,"priority":432},"references/evaluators_cross_encoder.md",{"path":661,"priority":432},"references/evaluators_sentence_transformer.md",{"path":663,"priority":432},"references/evaluators_sparse_encoder.md",{"path":523,"priority":432},{"path":666,"priority":432},"references/hf_jobs_execution.md",{"path":668,"priority":432},"references/losses_cross_encoder.md",{"path":670,"priority":432},"references/losses_sentence_transformer.md",{"path":672,"priority":432},"references/losses_sparse_encoder.md",{"path":674,"priority":432},"references/model_architectures.md",{"path":676,"priority":432},"references/prompts_and_instructions.md",{"path":678,"priority":432},"references/training_args.md",{"path":537,"priority":432},{"path":681,"priority":493},"scripts/mine_hard_negatives.py",{"path":683,"priority":493},"scripts/train_cross_encoder_distillation_example.py",{"path":685,"priority":493},"scripts/train_cross_encoder_example.py",{"path":687,"priority":493},"scripts/train_cross_encoder_listwise_example.py",{"path":689,"priority":493},"scripts/train_sentence_transformer_distillation_example.py",{"path":691,"priority":493},"scripts/train_sentence_transformer_example.py",{"path":693,"priority":493},"scripts/train_sentence_transformer_make_multilingual_example.py",{"path":695,"priority":493},"scripts/train_sentence_transformer_matryoshka_example.py",{"path":697,"priority":493},"scripts/train_sentence_transformer_multi_dataset_example.py",{"path":699,"priority":493},"scripts/train_sentence_transformer_static_embedding_example.py",{"path":701,"priority":493},"scripts/train_senten
ce_transformer_with_lora_example.py",{"path":703,"priority":493},"scripts/train_sparse_encoder_distillation_example.py",{"path":705,"priority":493},"scripts/train_sparse_encoder_example.py",{"basePath":399,"description":707,"displayName":401,"installMethods":708,"rationale":709,"selectedPaths":710,"source":309,"sourceLanguage":18,"type":251},"Use Transformers.js to run state-of-the-art machine learning models directly in JavaScript/TypeScript. Supports NLP (text classification, translation, summarization), computer vision (image classification, object detection), audio (speech recognition, audio classification), and multimodal tasks. Works in browsers and server-side runtimes (Node.js, Bun, Deno) with WebGPU/WASM using pre-trained models from Hugging Face Hub.",{"claudeCode":12},"SKILL.md frontmatter at skills/transformers-js/SKILL.md",[711,712,714,716,718,720,722,724],{"path":317,"priority":303},{"path":713,"priority":432},"references/CACHE.md",{"path":715,"priority":432},"references/CONFIGURATION.md",{"path":717,"priority":432},"references/EXAMPLES.md",{"path":719,"priority":432},"references/MODEL_ARCHITECTURES.md",{"path":721,"priority":432},"references/MODEL_REGISTRY.md",{"path":723,"priority":432},"references/PIPELINE_OPTIONS.md",{"path":725,"priority":432},"references/TEXT_GENERATION.md",{"sources":727},[728],"manual",{"closedIssues90d":236,"description":730,"forks":237,"homepage":731,"license":242,"openIssues90d":238,"pushedAt":239,"readmeSize":234,"stars":240,"topics":732},"Give your agents the power of the Hugging Face 
ecosystem","https://huggingface.co",[],{"classifiedAt":734,"discoverAt":735,"extractAt":736,"githubAt":736,"updatedAt":734},1778690772996,1778689536128,1778690770714,[214,218,217,213,220,219,215,216],{"evaluatedAt":246,"extractAt":285,"updatedAt":246},[],[741,768,791,822,847,876],{"_creationTime":742,"_id":743,"community":744,"display":745,"identity":750,"providers":755,"relations":762,"tags":764,"workflow":765},1778685991755.712,"k17b897sehkbe32k8dnrdd0wfh86nraz",{"reviewCount":8},{"description":746,"installMethods":747,"name":216,"sourceUrl":749},"Expert guidance for fast fine-tuning with Unsloth - 2-5x faster training, 50-80% less memory, LoRA/QLoRA optimization",{"claudeCode":748},"davila7/claude-code-templates","https://github.com/davila7/claude-code-templates",{"basePath":751,"githubOwner":752,"githubRepo":753,"locale":18,"slug":754,"type":251},"cli-tool/components/skills/ai-research/fine-tuning-unsloth","davila7","claude-code-templates","fine-tuning-unsloth",{"evaluate":756,"extract":761},{"promptVersionExtension":206,"promptVersionScoring":207,"score":757,"tags":758,"targetMarket":221,"tier":222},100,[214,216,213,759,760],"optimization","documentation",{"commitSha":276},{"repoId":763},"kd71fzn4s7r0269fkw47wt670n86ndz0",[760,214,213,759,216],{"evaluatedAt":766,"extractAt":767,"updatedAt":766},1778687220844,1778685991755,{"_creationTime":769,"_id":770,"community":771,"display":772,"identity":776,"providers":779,"relations":787,"tags":788,"workflow":789},1778685991755.7185,"k1729f87kej6wpzyz3hdtvqqrx86mwen",{"reviewCount":8},{"description":773,"installMethods":774,"name":775,"sourceUrl":749},"Implements and trains LLMs using Lightning AI's LitGPT with 20+ pretrained architectures (Llama, Gemma, Phi, Qwen, Mistral). Use when you need clean model implementations, an educational understanding of architectures, or production fine-tuning with LoRA/QLoRA. 
Single-file implementations, no abstraction layers.",{"claudeCode":748},"implementing-llms-litgpt",{"basePath":777,"githubOwner":752,"githubRepo":753,"locale":18,"slug":778,"type":251},"cli-tool/components/skills/ai-research/model-architecture-litgpt","model-architecture-litgpt",{"evaluate":780,"extract":786},{"promptVersionExtension":206,"promptVersionScoring":207,"score":757,"tags":781,"targetMarket":221,"tier":222},[213,782,783,784,214,785],"litgpt","lightning-ai","model-training","pytorch",{"commitSha":276},{"repoId":763},[214,783,782,213,784,785],{"evaluatedAt":790,"extractAt":767,"updatedAt":790},1778687846785,{"_creationTime":792,"_id":793,"community":794,"display":795,"identity":801,"providers":806,"relations":816,"tags":818,"workflow":819},1778691799740.4976,"k1719vgzsxtv8exr684y5ww47s86mzqh",{"reviewCount":8},{"description":796,"installMethods":797,"name":799,"sourceUrl":800},"Zero-shot time series forecasting with Google's TimesFM foundation model. Use for any univariate time series (sales, sensors, energy, vitals, weather) without training a custom model. Supports CSV/DataFrame/array inputs with point forecasts and prediction intervals. 
Includes a preflight system checker script to verify RAM/GPU before first use.",{"claudeCode":798},"K-Dense-AI/claude-scientific-skills","TimesFM Forecasting","https://github.com/K-Dense-AI/claude-scientific-skills",{"basePath":802,"githubOwner":803,"githubRepo":804,"locale":18,"slug":805,"type":251},"scientific-skills/timesfm-forecasting","K-Dense-AI","claude-scientific-skills","timesfm-forecasting",{"evaluate":807,"extract":814},{"promptVersionExtension":206,"promptVersionScoring":207,"score":757,"tags":808,"targetMarket":221,"tier":222},[809,810,811,812,813,220,219],"time-series","forecasting","univariate","foundation-model","timesfm",{"commitSha":276,"license":815},"MIT",{"repoId":817},"kd79rphh5gexy91xmpxc05h5mh86mm9r",[810,812,220,219,809,813,811],{"evaluatedAt":820,"extractAt":821,"updatedAt":820},1778694590335,1778691799740,{"_creationTime":823,"_id":824,"community":825,"display":826,"identity":830,"providers":834,"relations":840,"tags":843,"workflow":844},1778695116697.1785,"k17fxgg9ccq2rfqdb0x6fh4q3186nfdg",{"reviewCount":8},{"description":746,"installMethods":827,"name":216,"sourceUrl":829},{"claudeCode":828},"Orchestra-Research/AI-Research-SKILLs","https://github.com/Orchestra-Research/AI-Research-SKILLs",{"basePath":831,"githubOwner":832,"githubRepo":833,"locale":18,"slug":216,"type":251},"03-fine-tuning/unsloth","Orchestra-Research","AI-Research-SKILLs",{"evaluate":835,"extract":839},{"promptVersionExtension":206,"promptVersionScoring":207,"score":267,"tags":836,"targetMarket":221,"tier":222},[214,216,837,838,759,213],"lora","qlora",{"commitSha":276},{"parentExtensionId":841,"repoId":842},"k17155ws9qc0hw7a568bg79sfd86max8","kd70hj1y80mhra5xm5g188j5n586mg18",[214,213,837,759,838,216],{"evaluatedAt":845,"extractAt":846,"updatedAt":845},1778695824608,1778695116697,{"_creationTime":848,"_id":849,"community":850,"display":851,"identity":855,"providers":858,"relations":872,"tags":873,"workflow":874},1778695116697.1816,"k1765mxtemzz43015pwfy17tfs86n8hx",{"rev
iewCount":8},{"description":852,"installMethods":853,"name":854,"sourceUrl":829},"Fine-tune LLMs using reinforcement learning with TRL - SFT for instruction tuning, DPO for preference alignment, PPO/GRPO for reward optimization, and reward model training. Use when you need RLHF, want to align a model with preferences, or train from human feedback. Works with HuggingFace Transformers.",{"claudeCode":828},"fine-tuning-with-trl",{"basePath":856,"githubOwner":832,"githubRepo":833,"locale":18,"slug":857,"type":251},"06-post-training/trl-fine-tuning","trl-fine-tuning",{"evaluate":859,"extract":871},{"promptVersionExtension":206,"promptVersionScoring":207,"score":860,"tags":861,"targetMarket":221,"tier":222},96,[862,215,863,214,864,865,866,867,868,869,870],"post-training","reinforcement-learning","sft","dpo","ppo","grpo","rlhf","preference-alignment","huggingface-transformers",{"commitSha":276},{"parentExtensionId":841,"repoId":842},[865,214,867,870,862,866,869,863,868,864,215],{"evaluatedAt":875,"extractAt":846,"updatedAt":875},1778696138274,{"_creationTime":877,"_id":878,"community":879,"display":880,"identity":886,"providers":891,"relations":902,"tags":905,"workflow":906},1778696691708.3308,"k17d3c35ws96bb55ry97apwm5n86mqp2",{"reviewCount":8},{"description":881,"installMethods":882,"name":884,"sourceUrl":885},"Format prompts for different LLM providers with chat templates and HNSW-powered context retrieval",{"claudeCode":883},"ruvnet/ruflo","Chat 
Format","https://github.com/ruvnet/ruflo",{"basePath":887,"githubOwner":888,"githubRepo":889,"locale":18,"slug":890,"type":251},"plugins/ruflo-ruvllm/skills/chat-format","ruvnet","ruflo","chat-format",{"evaluate":892,"extract":901},{"promptVersionExtension":206,"promptVersionScoring":207,"score":757,"tags":893,"targetMarket":221,"tier":222},[213,894,895,896,897,898,899,900],"prompting","rag","context-retrieval","openai","anthropic","gemini","ollama",{"commitSha":276,"license":815},{"parentExtensionId":903,"repoId":904},"k17ekc0sj70ms9kgkkgr2ypr4s86mz40","kd7ed28gj8n0y3msk5dzrp05zs86nqtc",[898,896,899,213,900,897,894,895],{"evaluatedAt":907,"extractAt":908,"updatedAt":907},1778701390930,1778696691708]