[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"extension-skill-huggingface-huggingface-best-en":3,"guides-for-huggingface-huggingface-best":747,"similar-k1762e6s5rwd7spcymdvpb9rtn86m0jm-en":748},{"_creationTime":4,"_id":5,"children":6,"community":7,"display":9,"evaluation":15,"identity":254,"isFallback":237,"parentExtension":259,"providers":294,"relations":298,"repo":299,"tags":745,"workflow":746},1778690773482.4868,"k1762e6s5rwd7spcymdvpb9rtn86m0jm",[],{"reviewCount":8},0,{"description":10,"installMethods":11,"name":13,"sourceUrl":14},"Use when the user asks about finding the best, top, or recommended model for a task, wants to know what AI model to use, or wants to compare models by benchmark scores. Triggers on: \"best model for X\", \"what model should I use for\", \"top models for [task]\", \"which model runs on my laptop/machine/device\", \"recommend a model for\", \"what LLM should I use for\", \"compare models for\", \"what's state of the art for\", or any question about choosing an AI model for a specific use case. 
Always use this skill when the user wants model recommendations or comparisons, even if they don't explicitly mention HuggingFace or benchmarks.\n",{"claudeCode":12},"huggingface/skills","HuggingFace Best Model Finder","https://github.com/huggingface/skills",{"_creationTime":16,"_id":17,"extensionId":5,"locale":18,"result":19,"trustSignals":235,"workflow":252},1778691241235.032,"kn7fpa01namgc75m74z1sqdbfh86n297","en",{"checks":20,"evaluatedAt":192,"extensionSummary":193,"features":194,"nonGoals":200,"practices":205,"prerequisites":206,"promptVersionExtension":207,"promptVersionScoring":208,"purpose":209,"rationale":210,"score":211,"summary":212,"tags":213,"targetMarket":220,"tier":221,"useCases":222,"workflow":227},[21,26,29,32,36,39,44,48,51,54,58,62,65,69,72,75,78,81,84,87,91,95,99,103,107,110,114,117,121,124,127,130,133,136,139,143,147,150,153,157,160,163,166,169,173,176,179,182,185,189],{"category":22,"check":23,"severity":24,"summary":25},"Practical Utility","Problem relevance","pass","The description clearly states the problem of finding the best AI model for a task and provides specific trigger phrases and use cases.",{"category":22,"check":27,"severity":24,"summary":28},"Unique selling proposition","The skill goes beyond default LLM behavior by actively querying leaderboards, enriching data with model size, filtering by device, and presenting a comparison table, offering significant value over a simple prompt.",{"category":22,"check":30,"severity":24,"summary":31},"Production readiness","The skill appears production-ready, covering the complete lifecycle of model recommendation from parsing requests to providing setup instructions and follow-up questions.",{"category":33,"check":34,"severity":24,"summary":35},"Scope","Single responsibility principle","The skill focuses on a single, coherent domain: recommending AI models based on tasks and device constraints.",{"category":33,"check":37,"severity":24,"summary":38},"Description quality","The displayed 
description accurately reflects the skill's functionality and is concise and readable.",{"category":40,"check":41,"severity":42,"summary":43},"Invocation","Scoped tools","not_applicable","This skill does not expose explicit tools; it operates as a single unit of logic within the agent.",{"category":45,"check":46,"severity":42,"summary":47},"Documentation","Configuration & parameter reference","The skill does not appear to have configurable parameters or environment variables beyond what's described in the SKILL.md, which serves as its documentation.",{"category":33,"check":49,"severity":42,"summary":50},"Tool naming","This skill does not expose tools with user-facing names; it operates as a single skill unit.",{"category":33,"check":52,"severity":42,"summary":53},"Minimal I/O surface","As a skill without explicitly defined tools, this check is not applicable.",{"category":55,"check":56,"severity":24,"summary":57},"License","License usability","The bundled LICENSE file is the Apache-2.0 license, a recognized permissive open-source license.",{"category":59,"check":60,"severity":24,"summary":61},"Maintenance","Commit recency","The last commit was on 2026-05-12, which is within the last 3 months.",{"category":59,"check":63,"severity":24,"summary":64},"Dependency Management","The skill uses external APIs (`curl`) but does not appear to have direct 3rd party code dependencies that require explicit management.",{"category":66,"check":67,"severity":24,"summary":68},"Security","Secret Management","The skill uses a Hugging Face token but correctly references it via a file path (`~/.cache/huggingface/token`) and does not echo it to stdout.",{"category":66,"check":70,"severity":24,"summary":71},"Injection","The skill's instructions are clear and do not rely on external data as executable code. 
It treats API responses as data.",{"category":66,"check":73,"severity":24,"summary":74},"Transitive Supply-Chain Grenades","The skill uses `curl` to fetch data from APIs, but this is a standard, well-defined operation and not a runtime download of arbitrary code or instructions.",{"category":66,"check":76,"severity":24,"summary":77},"Sandbox Isolation","The skill operates by making API calls and processing data, with no indication of attempting to modify files outside its operational scope.",{"category":66,"check":79,"severity":24,"summary":80},"Sandbox escape primitives","No evidence of detached process spawns or deny-retry loops is present in the skill's instructions.",{"category":66,"check":82,"severity":24,"summary":83},"Data Exfiltration","The skill's outbound calls are documented and limited to fetching data from Hugging Face APIs; there are no instructions to exfiltrate confidential data.",{"category":66,"check":85,"severity":24,"summary":86},"Hidden Text Tricks","The bundled content is free of hidden-steering tricks. 
Descriptions and instructions use clean, printable ASCII.",{"category":88,"check":89,"severity":24,"summary":90},"Hooks","Opaque code execution","The skill's instructions are plain text and do not involve obfuscated code, base64 payloads, or runtime script downloads.",{"category":92,"check":93,"severity":24,"summary":94},"Portability","Structural Assumption","The skill assumes the presence of the Hugging Face token file and API endpoints but does not make assumptions about the user's project file structure.",{"category":96,"check":97,"severity":24,"summary":98},"Trust","Issues Attention","There were 4 issues opened and 6 closed in the last 90 days, indicating active maintenance and responsiveness.",{"category":100,"check":101,"severity":24,"summary":102},"Versioning","Release Management","The repository has a meaningful commit date and the structure implies versioning through Git history, although no explicit version number is present.",{"category":104,"check":105,"severity":42,"summary":106},"Execution","Validation","The skill does not expose structured inputs or outputs that would require schema validation libraries; its logic is contained within agent instructions.",{"category":66,"check":108,"severity":24,"summary":109},"Unguarded Destructive Operations","The skill is read-only in its core function (querying APIs) and does not perform any destructive operations.",{"category":111,"check":112,"severity":24,"summary":113},"Code Execution","Error Handling","The SKILL.md outlines specific error handling steps for API failures and missing data, guiding the agent on how to respond.",{"category":111,"check":115,"severity":42,"summary":116},"Logging","The skill does not perform destructive actions or outbound calls that require a dedicated audit log.",{"category":118,"check":119,"severity":24,"summary":120},"Compliance","GDPR","The skill operates on general task descriptions and model metadata, not personal user 
data.",{"category":118,"check":122,"severity":24,"summary":123},"Target market","The skill is globally applicable, recommending models based on task and hardware without regional restrictions.",{"category":92,"check":125,"severity":24,"summary":126},"Runtime stability","The skill relies on standard `curl` commands and API interactions, which are platform-agnostic.",{"category":45,"check":128,"severity":24,"summary":129},"README","The README provides a good overview of Hugging Face Skills and installation instructions.",{"category":33,"check":131,"severity":42,"summary":132},"Tool surface size","This is a skill, not a tool-based extension with multiple commands.",{"category":40,"check":134,"severity":42,"summary":135},"Overlapping near-synonym tools","This skill does not expose multiple tools.",{"category":45,"check":137,"severity":24,"summary":138},"Phantom features","All advertised capabilities, such as querying leaderboards and enriching model metadata, are detailed in the SKILL.md instructions.",{"category":140,"check":141,"severity":24,"summary":142},"Install","Installation instruction","The README provides clear installation instructions for various agents (Claude Code, Codex, Gemini CLI, Cursor) and usage examples.",{"category":144,"check":145,"severity":24,"summary":146},"Errors","Actionable error messages","The SKILL.md explicitly outlines error handling for missing leaderboards, model data, and ambiguous tasks, including recovery steps.",{"category":104,"check":148,"severity":24,"summary":149},"Pinned dependencies","The skill relies on standard tools like `curl` and `jq`, which are typically available on most systems. 
No specific version pinning is required for these external commands within the skill's context.",{"category":33,"check":151,"severity":42,"summary":152},"Dry-run preview","The skill is primarily for information retrieval and comparison, not state-changing operations, making a dry-run preview not applicable.",{"category":154,"check":155,"severity":24,"summary":156},"Protocol","Idempotent retry & timeouts","The skill handles API call errors and suggests retries where appropriate. The instructions implicitly rely on the agent's execution environment to manage timeouts.",{"category":118,"check":158,"severity":24,"summary":159},"Telemetry opt-in","The skill does not emit telemetry; its operations are purely local to the agent's execution environment and API interactions.",{"category":40,"check":161,"severity":24,"summary":162},"Precise Purpose","The skill clearly defines its purpose for recommending AI models, specifies target users (asking about best/top models), and lists explicit triggers and boundaries.",{"category":40,"check":164,"severity":24,"summary":165},"Concise Frontmatter","The frontmatter is concise, self-contained, and effectively summarizes the core capability and triggers.",{"category":45,"check":167,"severity":24,"summary":168},"Concise Body","The SKILL.md is well-structured, under 500 lines, and delegates deeper material to the instructions themselves rather than external files.",{"category":170,"check":171,"severity":24,"summary":172},"Context","Progressive Disclosure","The SKILL.md outlines the overall workflow and provides detailed steps inline, without needing to delegate to separate reference files for this scope of complexity.",{"category":170,"check":174,"severity":42,"summary":175},"Forked exploration","The skill is focused on information retrieval and does not involve deep exploration or code review requiring a forked context.",{"category":22,"check":177,"severity":24,"summary":178},"Usage examples","The SKILL.md provides clear, step-by-step 
instructions that function as examples of how the skill operates and what output to expect.",{"category":22,"check":180,"severity":24,"summary":181},"Edge cases","The SKILL.md explicitly documents failure modes like missing leaderboards or model data and provides recovery steps.",{"category":111,"check":183,"severity":42,"summary":184},"Tool Fallback","The skill does not require an external MCP server or custom tooling; it relies on standard API calls.",{"category":186,"check":187,"severity":24,"summary":188},"Safety","Halt on unexpected state","The SKILL.md explicitly instructs the agent to ask clarifying questions or note unavailable leaderboards, effectively halting on unexpected or ambiguous states.",{"category":92,"check":190,"severity":24,"summary":191},"Cross-skill coupling","The skill is self-contained and does not rely on other specific skills being loaded in the same session.",1778691241118,"This skill queries Hugging Face leaderboards to recommend the best AI models for a given task, considering hardware constraints and benchmark scores. 
It retrieves model metadata, filters by device compatibility, and presents a comparative table.",[195,196,197,198,199],"Queries Hugging Face benchmark leaderboards","Enriches results with model size and license data","Filters models based on device memory/VRAM constraints","Presents a comparison table of top-performing models","Flags API-only and locally-hostable models",[201,202,203,204],"Running models directly","Providing installation instructions for specific models","Evaluating models not present on Hugging Face leaderboards","Recommending models for tasks outside of AI/ML domains",[],[],"3.0.0","4.4.0","To help users find the most suitable AI models for their specific tasks and hardware constraints by leveraging Hugging Face's benchmark data.","The skill is exceptionally well-documented, robust in its error handling, and adheres to security best practices, making it a high-quality, verified extension.",99,"Excellent skill for finding and comparing AI models based on tasks and hardware.",[214,215,216,217,218,219],"huggingface","llm","model-recommendation","benchmarks","leaderboards","ai-models","global","verified",[223,224,225,226],"Finding the best LLM for coding tasks on a local machine.","Comparing top-performing vision models for image classification with specific VRAM limits.","Getting recommendations for multimodal models suitable for a cloud deployment.","Understanding which models are state-of-the-art for RAG based on benchmark scores.",[228,229,230,231,232,233,234],"Parse user request for task and device constraints","Find relevant benchmark datasets on Hugging Face","Fetch top models from selected benchmark leaderboards","Enrich model data with parameters, license, and size","Filter and rank models based on device compatibility and benchmark score","Output a comparison table and suggest a top pick","Ask for user preference on running the recommended model (local vs. 
HF Jobs)",{"codeQuality":236,"collectedAt":238,"documentation":239,"maintenance":242,"security":248,"testCoverage":250},{"hasLockfile":237},false,1778691223532,{"descriptionLength":240,"readmeSize":241},626,9821,{"closedIssues90d":243,"forks":244,"hasChangelog":237,"openIssues90d":245,"pushedAt":246,"stars":247},6,663,4,1778593131000,10482,{"hasNpmPackage":237,"license":249,"smitheryVerified":237},"Apache-2.0",{"hasCi":251,"hasTests":237},true,{"updatedAt":253},1778691241235,{"basePath":255,"githubOwner":214,"githubRepo":256,"locale":18,"slug":257,"type":258},"skills/huggingface-best","skills","huggingface-best","skill",{"_creationTime":260,"_id":261,"community":262,"display":263,"identity":268,"parentExtension":271,"providers":272,"relations":288,"tags":290,"workflow":291},1778690773482.486,"k175g1spb5757qt4tnj9cktcn986mshy",{"reviewCount":8},{"description":264,"installMethods":265,"name":267,"sourceUrl":14},"Agent Skills for AI/ML tasks including dataset creation, model training, evaluation, and research paper publishing on Hugging Face Hub",{"claudeCode":266},"huggingface-skills","Hugging Face 
Skills",{"basePath":269,"githubOwner":214,"githubRepo":256,"locale":18,"slug":256,"type":270},"","plugin",null,{"evaluate":273,"extract":283},{"promptVersionExtension":207,"promptVersionScoring":208,"score":274,"tags":275,"targetMarket":220,"tier":221},98,[214,276,277,278,279,280,281,282],"ai","ml","datasets","models","training","cli","python",{"commitSha":284,"license":249,"plugin":285},"HEAD",{"mcpCount":8,"provider":286,"skillCount":287},"classify",14,{"repoId":289},"kd72xwt5xnc0ktc4p7smzfcp3986m959",[276,281,278,214,277,279,282,280],{"evaluatedAt":292,"extractAt":293,"updatedAt":292},1778691185872,1778690773482,{"evaluate":295,"extract":297},{"promptVersionExtension":207,"promptVersionScoring":208,"score":211,"tags":296,"targetMarket":220,"tier":221},[214,215,216,217,218,219],{"commitSha":284,"license":249},{"parentExtensionId":261,"repoId":289},{"_creationTime":300,"_id":289,"identity":301,"providers":302,"workflow":741},1778689536128.5474,{"githubOwner":214,"githubRepo":256,"sourceUrl":14},{"classify":303,"discover":734,"github":737},{"commitSha":284,"extensions":304},[305,319,328,336,344,352,360,366,374,382,390,398,406,414,422,430,473,481,487,492,509,515,522,564,575,594,600,620,632,656,714],{"basePath":269,"description":264,"displayName":266,"installMethods":306,"rationale":307,"selectedPaths":308,"source":317,"sourceLanguage":18,"type":318},{"claudeCode":12},"marketplace.json at .claude-plugin/marketplace.json",[309,312,314],{"path":310,"priority":311},".claude-plugin/marketplace.json","mandatory",{"path":313,"priority":311},"README.md",{"path":315,"priority":316},"LICENSE","high","rule","marketplace",{"basePath":320,"description":321,"displayName":322,"installMethods":323,"rationale":324,"selectedPaths":325,"source":317,"sourceLanguage":18,"type":270},"skills/huggingface-llm-trainer","Train or fine-tune language models using TRL on Hugging Face Jobs infrastructure. 
Covers SFT, DPO, GRPO and reward modeling training methods, plus GGUF conversion for local deployment. Includes hardware selection, cost estimation, Trackio monitoring, and Hub persistence.","huggingface-llm-trainer",{"claudeCode":322},"inline plugin source from marketplace.json at skills/huggingface-llm-trainer",[326],{"path":327,"priority":316},"SKILL.md",{"basePath":329,"description":330,"displayName":331,"installMethods":332,"rationale":333,"selectedPaths":334,"source":317,"sourceLanguage":18,"type":270},"skills/huggingface-local-models","Use to select models to run locally with llama.cpp and GGUF on CPU, Mac Metal, CUDA, or ROCm. Covers finding GGUFs, quant selection, running servers, exact GGUF file lookup, conversion, and OpenAI-compatible local serving.","huggingface-local-models",{"claudeCode":331},"inline plugin source from marketplace.json at skills/huggingface-local-models",[335],{"path":327,"priority":316},{"basePath":337,"description":338,"displayName":339,"installMethods":340,"rationale":341,"selectedPaths":342,"source":317,"sourceLanguage":18,"type":270},"skills/huggingface-paper-publisher","Publish and manage research papers on Hugging Face Hub. 
Supports creating paper pages, linking papers to models/datasets, claiming authorship, and generating professional markdown-based research articles.","huggingface-paper-publisher",{"claudeCode":339},"inline plugin source from marketplace.json at skills/huggingface-paper-publisher",[343],{"path":327,"priority":316},{"basePath":345,"description":346,"displayName":347,"installMethods":348,"rationale":349,"selectedPaths":350,"source":317,"sourceLanguage":18,"type":270},"skills/huggingface-papers","Look up and read Hugging Face paper pages in markdown, and use the papers API for structured metadata like authors, linked models, datasets, Spaces, and media URLs when needed.","huggingface-papers",{"claudeCode":347},"inline plugin source from marketplace.json at skills/huggingface-papers",[351],{"path":327,"priority":316},{"basePath":353,"description":354,"displayName":355,"installMethods":356,"rationale":357,"selectedPaths":358,"source":317,"sourceLanguage":18,"type":270},"skills/huggingface-community-evals","Add and manage evaluation results in Hugging Face model cards. Supports extracting eval tables from README content, importing scores from Artificial Analysis API, and running custom evaluations with vLLM/lighteval.","huggingface-community-evals",{"claudeCode":355},"inline plugin source from marketplace.json at skills/huggingface-community-evals",[359],{"path":327,"priority":316},{"basePath":255,"description":361,"displayName":257,"installMethods":362,"rationale":363,"selectedPaths":364,"source":317,"sourceLanguage":18,"type":270},"Find the best AI model for any task by querying Hugging Face leaderboards and benchmarks. 
Recommends top models based on task type, hardware constraints, and benchmark scores.",{"claudeCode":257},"inline plugin source from marketplace.json at skills/huggingface-best",[365],{"path":327,"priority":316},{"basePath":367,"description":368,"displayName":369,"installMethods":370,"rationale":371,"selectedPaths":372,"source":317,"sourceLanguage":18,"type":270},"skills/hf-cli","Execute Hugging Face Hub operations using the hf CLI. Download models/datasets, upload files, manage repos, and run cloud compute jobs.","hf-cli",{"claudeCode":369},"inline plugin source from marketplace.json at skills/hf-cli",[373],{"path":327,"priority":316},{"basePath":375,"description":376,"displayName":377,"installMethods":378,"rationale":379,"selectedPaths":380,"source":317,"sourceLanguage":18,"type":270},"skills/huggingface-trackio","Track and visualize ML training experiments with Trackio. Log metrics via Python API and retrieve them via CLI. Supports real-time dashboards synced to HF Spaces.","huggingface-trackio",{"claudeCode":377},"inline plugin source from marketplace.json at skills/huggingface-trackio",[381],{"path":327,"priority":316},{"basePath":383,"description":384,"displayName":385,"installMethods":386,"rationale":387,"selectedPaths":388,"source":317,"sourceLanguage":18,"type":270},"skills/huggingface-datasets","Explore, query, and extract data from any Hugging Face dataset using the Dataset Viewer REST API and npx tooling. Zero Python dependencies — covers split/config discovery, row pagination, text search, filtering, SQL via parquetlens, and dataset upload via CLI.","huggingface-datasets",{"claudeCode":385},"inline plugin source from marketplace.json at skills/huggingface-datasets",[389],{"path":327,"priority":316},{"basePath":391,"description":392,"displayName":393,"installMethods":394,"rationale":395,"selectedPaths":396,"source":317,"sourceLanguage":18,"type":270},"skills/huggingface-tool-builder","Build reusable scripts for Hugging Face Hub and API workflows. 
Useful for chaining API calls, enriching Hub metadata, or automating repeated tasks.","huggingface-tool-builder",{"claudeCode":393},"inline plugin source from marketplace.json at skills/huggingface-tool-builder",[397],{"path":327,"priority":316},{"basePath":399,"description":400,"displayName":401,"installMethods":402,"rationale":403,"selectedPaths":404,"source":317,"sourceLanguage":18,"type":270},"skills/huggingface-gradio","Build Gradio web UIs and demos in Python. Use when creating or editing Gradio apps, components, event listeners, layouts, or chatbots.","huggingface-gradio",{"claudeCode":401},"inline plugin source from marketplace.json at skills/huggingface-gradio",[405],{"path":327,"priority":316},{"basePath":407,"description":408,"displayName":409,"installMethods":410,"rationale":411,"selectedPaths":412,"source":317,"sourceLanguage":18,"type":270},"skills/transformers-js","Run state-of-the-art machine learning models directly in JavaScript/TypeScript for NLP, computer vision, audio processing, and multimodal tasks. Works in Node.js and browsers with WebGPU/WASM using Hugging Face models.","transformers-js",{"claudeCode":409},"inline plugin source from marketplace.json at skills/transformers-js",[413],{"path":327,"priority":316},{"basePath":415,"description":416,"displayName":417,"installMethods":418,"rationale":419,"selectedPaths":420,"source":317,"sourceLanguage":18,"type":270},"skills/huggingface-vision-trainer","Train and fine-tune object detection models (RTDETRv2, YOLOS, DETR and others) and image classification models (timm and transformers models — MobileNetV3, MobileViT, ResNet, ViT/DINOv3) using Transformers Trainer API on Hugging Face Jobs infrastructure or locally. 
Includes COCO dataset format support, Albumentations augmentation, mAP/mAR metrics, trackio tracking, hardware selection, and Hub persistence.","huggingface-vision-trainer",{"claudeCode":417},"inline plugin source from marketplace.json at skills/huggingface-vision-trainer",[421],{"path":327,"priority":316},{"basePath":423,"description":424,"displayName":425,"installMethods":426,"rationale":427,"selectedPaths":428,"source":317,"sourceLanguage":18,"type":270},"skills/train-sentence-transformers","Train or fine-tune sentence-transformers models across all three architectures: SentenceTransformer (bi-encoder embeddings), CrossEncoder (rerankers), and SparseEncoder (SPLADE). Covers loss selection, hard-negative mining, evaluators, distillation, LoRA, Matryoshka, and Hugging Face Hub publishing.","train-sentence-transformers",{"claudeCode":425},"inline plugin source from marketplace.json at skills/train-sentence-transformers",[429],{"path":327,"priority":316},{"basePath":269,"description":264,"displayName":266,"installMethods":431,"license":249,"rationale":432,"selectedPaths":433,"source":317,"sourceLanguage":18,"type":270},{"claudeCode":266},"plugin manifest at 
.claude-plugin/plugin.json",[434,436,437,438,441,443,445,447,449,451,453,455,457,459,461,463,465,467,469,471],{"path":435,"priority":311},".claude-plugin/plugin.json",{"path":313,"priority":311},{"path":315,"priority":316},{"path":439,"priority":440},"skills/hf-cli/SKILL.md","medium",{"path":442,"priority":440},"skills/huggingface-best/SKILL.md",{"path":444,"priority":440},"skills/huggingface-community-evals/SKILL.md",{"path":446,"priority":440},"skills/huggingface-datasets/SKILL.md",{"path":448,"priority":440},"skills/huggingface-gradio/SKILL.md",{"path":450,"priority":440},"skills/huggingface-llm-trainer/SKILL.md",{"path":452,"priority":440},"skills/huggingface-local-models/SKILL.md",{"path":454,"priority":440},"skills/huggingface-paper-publisher/SKILL.md",{"path":456,"priority":440},"skills/huggingface-papers/SKILL.md",{"path":458,"priority":440},"skills/huggingface-tool-builder/SKILL.md",{"path":460,"priority":440},"skills/huggingface-trackio/SKILL.md",{"path":462,"priority":440},"skills/huggingface-vision-trainer/SKILL.md",{"path":464,"priority":440},"skills/train-sentence-transformers/SKILL.md",{"path":466,"priority":440},"skills/transformers-js/SKILL.md",{"path":468,"priority":311},".mcp.json",{"path":470,"priority":316},"agents/AGENTS.md",{"path":472,"priority":316},".cursor-plugin/plugin.json",{"basePath":474,"description":475,"displayName":476,"installMethods":477,"rationale":478,"selectedPaths":479,"source":317,"sourceLanguage":18,"type":258},"hf-mcp/skills/hf-mcp","Use Hugging Face Hub via MCP server tools. Search models, datasets, Spaces, papers. Get repo details, fetch documentation, run compute jobs, and use Gradio Spaces as AI tools. 
Available when connected to the HF MCP server.","hf-mcp",{"claudeCode":12},"SKILL.md frontmatter at hf-mcp/skills/hf-mcp/SKILL.md",[480],{"path":327,"priority":311},{"basePath":367,"description":482,"displayName":369,"installMethods":483,"rationale":484,"selectedPaths":485,"source":317,"sourceLanguage":18,"type":258},"Hugging Face Hub CLI (`hf`) for downloading, uploading, and managing models, datasets, spaces, buckets, repos, papers, jobs, and more on the Hugging Face Hub. Use when: handling authentication; managing local cache; managing Hugging Face Buckets; running or scheduling jobs on Hugging Face infrastructure; managing Hugging Face repos; discussions and pull requests; browsing models, datasets and spaces; reading, searching, or browsing academic papers; managing collections; querying datasets; configuring spaces; setting up webhooks; or deploying and managing HF Inference Endpoints. Make sure to use this skill whenever the user mentions 'hf', 'huggingface', 'Hugging Face', 'huggingface-cli', or 'hugging face cli', or wants to do anything related to the Hugging Face ecosystem and to AI and ML in general. Also use for cloud storage needs like training checkpoints, data pipelines, or agent traces. Use even if the user doesn't explicitly ask for a CLI command. Replaces the deprecated `huggingface-cli`.",{"claudeCode":12},"SKILL.md frontmatter at skills/hf-cli/SKILL.md",[486],{"path":327,"priority":311},{"basePath":255,"description":10,"displayName":257,"installMethods":488,"rationale":489,"selectedPaths":490,"source":317,"sourceLanguage":18,"type":258},{"claudeCode":12},"SKILL.md frontmatter at skills/huggingface-best/SKILL.md",[491],{"path":327,"priority":311},{"basePath":353,"description":493,"displayName":355,"installMethods":494,"rationale":495,"selectedPaths":496,"source":317,"sourceLanguage":18,"type":258},"Run evaluations for Hugging Face Hub models using inspect-ai and lighteval on local hardware. 
Use for backend selection, local GPU evals, and choosing between vLLM / Transformers / accelerate. Not for HF Jobs orchestration, model-card PRs, .eval_results publication, or community-evals automation.",{"claudeCode":12},"SKILL.md frontmatter at skills/huggingface-community-evals/SKILL.md",[497,498,501,503,505,507],{"path":327,"priority":311},{"path":499,"priority":500},"examples/.env.example","low",{"path":502,"priority":500},"examples/USAGE_EXAMPLES.md",{"path":504,"priority":500},"scripts/inspect_eval_uv.py",{"path":506,"priority":500},"scripts/inspect_vllm_uv.py",{"path":508,"priority":500},"scripts/lighteval_vllm_uv.py",{"basePath":383,"description":510,"displayName":385,"installMethods":511,"rationale":512,"selectedPaths":513,"source":317,"sourceLanguage":18,"type":258},"Use this skill for Hugging Face Dataset Viewer API workflows that fetch subset/split metadata, paginate rows, search text, apply filters, download parquet URLs, and read size or statistics.\r",{"claudeCode":12},"SKILL.md frontmatter at skills/huggingface-datasets/SKILL.md",[514],{"path":327,"priority":311},{"basePath":399,"description":400,"displayName":401,"installMethods":516,"rationale":517,"selectedPaths":518,"source":317,"sourceLanguage":18,"type":258},{"claudeCode":12},"SKILL.md frontmatter at skills/huggingface-gradio/SKILL.md",[519,520],{"path":327,"priority":311},{"path":521,"priority":440},"examples.md",{"basePath":320,"description":523,"displayName":322,"installMethods":524,"rationale":525,"selectedPaths":526,"source":317,"sourceLanguage":18,"type":258},"Train or fine-tune language and vision models using TRL (Transformer Reinforcement Learning) or Unsloth with Hugging Face Jobs infrastructure. Covers SFT, DPO, GRPO and reward modeling training methods, plus GGUF conversion for local deployment. 
Includes guidance on the TRL Jobs package, UV scripts with PEP 723 format, dataset preparation and validation, hardware selection, cost estimation, Trackio monitoring, Hub authentication, model selection/leaderboards and model persistence. Use for tasks involving cloud GPU training, GGUF conversion, or when users mention training on Hugging Face Jobs without local GPU setup.",{"claudeCode":12},"SKILL.md frontmatter at skills/huggingface-llm-trainer/SKILL.md",[527,528,530,532,534,536,538,540,542,544,546,548,550,552,554,556,558,560,562],{"path":327,"priority":311},{"path":529,"priority":440},"references/gguf_conversion.md",{"path":531,"priority":440},"references/hardware_guide.md",{"path":533,"priority":440},"references/hub_saving.md",{"path":535,"priority":440},"references/local_training_macos.md",{"path":537,"priority":440},"references/reliability_principles.md",{"path":539,"priority":440},"references/trackio_guide.md",{"path":541,"priority":440},"references/training_methods.md",{"path":543,"priority":440},"references/training_patterns.md",{"path":545,"priority":440},"references/troubleshooting.md",{"path":547,"priority":440},"references/unsloth.md",{"path":549,"priority":500},"scripts/convert_to_gguf.py",{"path":551,"priority":500},"scripts/dataset_inspector.py",{"path":553,"priority":500},"scripts/estimate_cost.py",{"path":555,"priority":500},"scripts/hf_benchmarks.py",{"path":557,"priority":500},"scripts/train_dpo_example.py",{"path":559,"priority":500},"scripts/train_grpo_example.py",{"path":561,"priority":500},"scripts/train_sft_example.py",{"path":563,"priority":500},"scripts/unsloth_sft_example.py",{"basePath":329,"description":330,"displayName":331,"installMethods":565,"rationale":566,"selectedPaths":567,"source":317,"sourceLanguage":18,"type":258},{"claudeCode":12},"SKILL.md frontmatter at 
skills/huggingface-local-models/SKILL.md",[568,569,571,573],{"path":327,"priority":311},{"path":570,"priority":440},"references/hardware.md",{"path":572,"priority":440},"references/hub-discovery.md",{"path":574,"priority":440},"references/quantization.md",{"basePath":337,"description":338,"displayName":339,"installMethods":576,"rationale":577,"selectedPaths":578,"source":317,"sourceLanguage":18,"type":258},{"claudeCode":12},"SKILL.md frontmatter at skills/huggingface-paper-publisher/SKILL.md",[579,580,582,584,586,588,590,592],{"path":327,"priority":311},{"path":581,"priority":500},"examples/example_usage.md",{"path":583,"priority":440},"references/quick_reference.md",{"path":585,"priority":500},"scripts/paper_manager.py",{"path":587,"priority":500},"templates/arxiv.md",{"path":589,"priority":500},"templates/ml-report.md",{"path":591,"priority":500},"templates/modern.md",{"path":593,"priority":500},"templates/standard.md",{"basePath":345,"description":595,"displayName":347,"installMethods":596,"rationale":597,"selectedPaths":598,"source":317,"sourceLanguage":18,"type":258},"Look up and read Hugging Face paper pages in markdown, and use the papers API for structured metadata such as authors, linked models/datasets/spaces, GitHub repo and project page. Use when the user shares a Hugging Face paper page URL, an arXiv URL or ID, or asks to summarize, explain, or analyze an AI research paper.",{"claudeCode":12},"SKILL.md frontmatter at skills/huggingface-papers/SKILL.md",[599],{"path":327,"priority":311},{"basePath":391,"description":601,"displayName":393,"installMethods":602,"rationale":603,"selectedPaths":604,"source":317,"sourceLanguage":18,"type":258},"Use this skill when the user wants to build tools/scripts or achieve a task where using data from the Hugging Face API would help. This is especially useful when chaining or combining API calls or when the task will be repeated/automated. 
This Skill creates a reusable script to fetch, enrich or process data.",{"claudeCode":12},"SKILL.md frontmatter at skills/huggingface-tool-builder/SKILL.md",[605,606,608,610,612,614,616,618],{"path":327,"priority":311},{"path":607,"priority":440},"references/baseline_hf_api.py",{"path":609,"priority":440},"references/baseline_hf_api.sh",{"path":611,"priority":440},"references/baseline_hf_api.tsx",{"path":613,"priority":440},"references/find_models_by_paper.sh",{"path":615,"priority":440},"references/hf_enrich_models.sh",{"path":617,"priority":440},"references/hf_model_card_frontmatter.sh",{"path":619,"priority":440},"references/hf_model_papers_auth.sh",{"basePath":375,"description":621,"displayName":377,"installMethods":622,"rationale":623,"selectedPaths":624,"source":317,"sourceLanguage":18,"type":258},"Track and visualize ML training experiments with Trackio. Use when logging metrics during training (Python API), firing alerts for training diagnostics, or retrieving/analyzing logged metrics (CLI). Supports real-time dashboard visualization, alerts with webhooks, HF Space syncing, and JSON output for automation.",{"claudeCode":12},"SKILL.md frontmatter at skills/huggingface-trackio/SKILL.md",[625,626,628,630],{"path":327,"priority":311},{"path":627,"priority":440},"references/alerts.md",{"path":629,"priority":440},"references/logging_metrics.md",{"path":631,"priority":440},"references/retrieving_metrics.md",{"basePath":415,"description":633,"displayName":417,"installMethods":634,"rationale":635,"selectedPaths":636,"source":317,"sourceLanguage":18,"type":258},"Trains and fine-tunes vision models for object detection (D-FINE, RT-DETR v2, DETR, YOLOS), image classification (timm models — MobileNetV3, MobileViT, ResNet, ViT/DINOv3 — plus any Transformers classifier), and SAM/SAM2 segmentation using Hugging Face Transformers on Hugging Face Jobs cloud GPUs. 
Covers COCO-format dataset preparation, Albumentations augmentation, mAP/mAR evaluation, accuracy metrics, SAM segmentation with bbox/point prompts, DiceCE loss, hardware selection, cost estimation, Trackio monitoring, and Hub persistence. Use when users mention training object detection, image classification, SAM, SAM2, segmentation, image matting, DETR, D-FINE, RT-DETR, ViT, timm, MobileNet, ResNet, bounding box models, or fine-tuning vision models on Hugging Face Jobs.",{"claudeCode":12},"SKILL.md frontmatter at skills/huggingface-vision-trainer/SKILL.md",[637,638,640,641,643,645,646,648,649,650,652,654],{"path":327,"priority":311},{"path":639,"priority":440},"references/finetune_sam2_trainer.md",{"path":533,"priority":440},{"path":642,"priority":440},"references/image_classification_training_notebook.md",{"path":644,"priority":440},"references/object_detection_training_notebook.md",{"path":537,"priority":440},{"path":647,"priority":440},"references/timm_trainer.md",{"path":551,"priority":500},{"path":553,"priority":500},{"path":651,"priority":500},"scripts/image_classification_training.py",{"path":653,"priority":500},"scripts/object_detection_training.py",{"path":655,"priority":500},"scripts/sam_segmentation_training.py",{"basePath":423,"description":657,"displayName":425,"installMethods":658,"rationale":659,"selectedPaths":660,"source":317,"sourceLanguage":18,"type":258},"Train or fine-tune sentence-transformers models across `SentenceTransformer` (bi-encoder; dense or static embedding model; for retrieval, similarity, clustering, classification, paraphrase mining, dedup, multimodal), `CrossEncoder` (reranker; pair scoring for two-stage retrieval / pair classification), and `SparseEncoder` (SPLADE, sparse embedding model; for learned-sparse retrieval). Covers loss selection, hard-negative mining, evaluators, distillation, LoRA, Matryoshka, and Hugging Face Hub publishing. 
Use for any sentence-transformers training task.",{"claudeCode":12},"SKILL.md frontmatter at skills/train-sentence-transformers/SKILL.md",[661,662,664,666,668,670,672,673,675,677,679,681,683,685,687,688,690,692,694,696,698,700,702,704,706,708,710,712],{"path":327,"priority":311},{"path":663,"priority":440},"references/base_model_selection.md",{"path":665,"priority":440},"references/dataset_formats.md",{"path":667,"priority":440},"references/evaluators_cross_encoder.md",{"path":669,"priority":440},"references/evaluators_sentence_transformer.md",{"path":671,"priority":440},"references/evaluators_sparse_encoder.md",{"path":531,"priority":440},{"path":674,"priority":440},"references/hf_jobs_execution.md",{"path":676,"priority":440},"references/losses_cross_encoder.md",{"path":678,"priority":440},"references/losses_sentence_transformer.md",{"path":680,"priority":440},"references/losses_sparse_encoder.md",{"path":682,"priority":440},"references/model_architectures.md",{"path":684,"priority":440},"references/prompts_and_instructions.md",{"path":686,"priority":440},"references/training_args.md",{"path":545,"priority":440},{"path":689,"priority":500},"scripts/mine_hard_negatives.py",{"path":691,"priority":500},"scripts/train_cross_encoder_distillation_example.py",{"path":693,"priority":500},"scripts/train_cross_encoder_example.py",{"path":695,"priority":500},"scripts/train_cross_encoder_listwise_example.py",{"path":697,"priority":500},"scripts/train_sentence_transformer_distillation_example.py",{"path":699,"priority":500},"scripts/train_sentence_transformer_example.py",{"path":701,"priority":500},"scripts/train_sentence_transformer_make_multilingual_example.py",{"path":703,"priority":500},"scripts/train_sentence_transformer_matryoshka_example.py",{"path":705,"priority":500},"scripts/train_sentence_transformer_multi_dataset_example.py",{"path":707,"priority":500},"scripts/train_sentence_transformer_static_embedding_example.py",{"path":709,"priority":500},"scripts/train_senten
ce_transformer_with_lora_example.py",{"path":711,"priority":500},"scripts/train_sparse_encoder_distillation_example.py",{"path":713,"priority":500},"scripts/train_sparse_encoder_example.py",{"basePath":407,"description":715,"displayName":409,"installMethods":716,"rationale":717,"selectedPaths":718,"source":317,"sourceLanguage":18,"type":258},"Use Transformers.js to run state-of-the-art machine learning models directly in JavaScript/TypeScript. Supports NLP (text classification, translation, summarization), computer vision (image classification, object detection), audio (speech recognition, audio classification), and multimodal tasks. Works in browsers and server-side runtimes (Node.js, Bun, Deno) with WebGPU/WASM using pre-trained models from Hugging Face Hub.",{"claudeCode":12},"SKILL.md frontmatter at skills/transformers-js/SKILL.md",[719,720,722,724,726,728,730,732],{"path":327,"priority":311},{"path":721,"priority":440},"references/CACHE.md",{"path":723,"priority":440},"references/CONFIGURATION.md",{"path":725,"priority":440},"references/EXAMPLES.md",{"path":727,"priority":440},"references/MODEL_ARCHITECTURES.md",{"path":729,"priority":440},"references/MODEL_REGISTRY.md",{"path":731,"priority":440},"references/PIPELINE_OPTIONS.md",{"path":733,"priority":440},"references/TEXT_GENERATION.md",{"sources":735},[736],"manual",{"closedIssues90d":243,"description":738,"forks":244,"homepage":739,"license":249,"openIssues90d":245,"pushedAt":246,"readmeSize":241,"stars":247,"topics":740},"Give your agents the power of the Hugging Face 
ecosystem","https://huggingface.co",[],{"classifiedAt":742,"discoverAt":743,"extractAt":744,"githubAt":744,"updatedAt":742},1778690772996,1778689536128,1778690770714,[219,217,214,218,215,216],{"evaluatedAt":253,"extractAt":293,"updatedAt":253},[],[749,768,802,830,861,891],{"_creationTime":750,"_id":751,"community":752,"display":753,"identity":755,"providers":756,"relations":764,"tags":765,"workflow":766},1778690773482.4866,"k17a3mmgvm5hj49twj487hp64186n2qa",{"reviewCount":8},{"description":482,"installMethods":754,"name":369,"sourceUrl":14},{"claudeCode":12},{"basePath":367,"githubOwner":214,"githubRepo":256,"locale":18,"slug":369,"type":258},{"evaluate":757,"extract":763},{"promptVersionExtension":207,"promptVersionScoring":208,"score":758,"tags":759,"targetMarket":220,"tier":221},100,[281,214,760,761,762],"mlops","data-management","model-management",{"commitSha":284},{"parentExtensionId":261,"repoId":289},[281,761,214,760,762],{"evaluatedAt":767,"extractAt":293,"updatedAt":767},1778691223210,{"_creationTime":769,"_id":770,"community":771,"display":772,"identity":778,"providers":783,"relations":795,"tags":798,"workflow":799},1778696691708.3308,"k17d3c35ws96bb55ry97apwm5n86mqp2",{"reviewCount":8},{"description":773,"installMethods":774,"name":776,"sourceUrl":777},"Format prompts for different LLM providers with chat templates and HNSW-powered context retrieval",{"claudeCode":775},"ruvnet/ruflo","Chat 
Format","https://github.com/ruvnet/ruflo",{"basePath":779,"githubOwner":780,"githubRepo":781,"locale":18,"slug":782,"type":258},"plugins/ruflo-ruvllm/skills/chat-format","ruvnet","ruflo","chat-format",{"evaluate":784,"extract":793},{"promptVersionExtension":207,"promptVersionScoring":208,"score":758,"tags":785,"targetMarket":220,"tier":221},[215,786,787,788,789,790,791,792],"prompting","rag","context-retrieval","openai","anthropic","gemini","ollama",{"commitSha":284,"license":794},"MIT",{"parentExtensionId":796,"repoId":797},"k17ekc0sj70ms9kgkkgr2ypr4s86mz40","kd7ed28gj8n0y3msk5dzrp05zs86nqtc",[790,788,791,215,792,789,786,787],{"evaluatedAt":800,"extractAt":801,"updatedAt":800},1778701390930,1778696691708,{"_creationTime":803,"_id":804,"community":805,"display":806,"identity":812,"providers":816,"relations":823,"tags":826,"workflow":827},1778699234184.611,"k179b6dkc777g1rgyecze04wqn86m6y4",{"reviewCount":8},{"description":807,"installMethods":808,"name":810,"sourceUrl":811},"Process-first advisor routing for Claude, Codex, or Gemini via `omc ask`, with artifact capture and no raw CLI 
assembly",{"claudeCode":809},"Yeachan-Heo/oh-my-claudecode","oh-my-claudecode","https://github.com/Yeachan-Heo/oh-my-claudecode",{"basePath":813,"githubOwner":814,"githubRepo":810,"locale":18,"slug":815,"type":258},"skills/ask","Yeachan-Heo","ask",{"evaluate":817,"extract":822},{"promptVersionExtension":207,"promptVersionScoring":208,"score":758,"tags":818,"targetMarket":220,"tier":221},[281,819,215,786,820,821],"automation","code-review","artifact-generation",{"commitSha":284,"license":794},{"parentExtensionId":824,"repoId":825},"k17brg5egdw1jbncj1j4wfv3fh86n639","kd74zv63fryf9prygtq7gf4es986n22y",[821,819,281,820,215,786],{"evaluatedAt":828,"extractAt":829,"updatedAt":828},1778699303045,1778699234184,{"_creationTime":831,"_id":832,"community":833,"display":834,"identity":840,"providers":845,"relations":854,"tags":857,"workflow":858},1778696595410.5698,"k171sdysmt658g1cdd7hgt8p8h86nms7",{"reviewCount":8},{"description":835,"installMethods":836,"name":838,"sourceUrl":839},"End-of-session ritual that audits changes, runs quality checks, captures learnings, and produces a session summary. 
Use when saying \"wrap up\", \"done for the day\", \"finish coding\", or ending a coding session.",{"claudeCode":837},"rohitg00/pro-workflow","Wrap-Up Ritual","https://github.com/rohitg00/pro-workflow",{"basePath":841,"githubOwner":842,"githubRepo":843,"locale":18,"slug":844,"type":258},"skills/wrap-up","rohitg00","pro-workflow","wrap-up",{"evaluate":846,"extract":853},{"promptVersionExtension":207,"promptVersionScoring":208,"score":758,"tags":847,"targetMarket":220,"tier":221},[848,215,849,850,851,852],"workflow","productivity","memory","knowledge-base","code-quality",{"commitSha":284,"license":794},{"parentExtensionId":855,"repoId":856},"k17fxtjcfh5gvxdrhv2dmgn1t986mdhv","kd7am4e918eq98hrd9s31jm4vs86nn0b",[852,851,215,850,849,848],{"evaluatedAt":859,"extractAt":860,"updatedAt":859},1778697164619,1778696595410,{"_creationTime":862,"_id":863,"community":864,"display":865,"identity":871,"providers":876,"relations":884,"tags":887,"workflow":888},1778694269038.6707,"k178ghjhvwyw1pv6vxnaqcwgyx86m2g7",{"reviewCount":8},{"description":866,"installMethods":867,"name":869,"sourceUrl":870},"This skill should be used when the user asks to \"start an LLM project\", \"design batch pipeline\", \"evaluate task-model fit\", \"structure agent project\", or mentions pipeline architecture, agent-assisted development, cost estimation, or choosing between LLM and traditional approaches.",{"claudeCode":868},"muratcankoylan/Agent-Skills-for-Context-Engineering","Project 
Development","https://github.com/muratcankoylan/Agent-Skills-for-Context-Engineering",{"basePath":872,"githubOwner":873,"githubRepo":874,"locale":18,"slug":875,"type":258},"skills/project-development","muratcankoylan","Agent-Skills-for-Context-Engineering","project-development",{"evaluate":877,"extract":883},{"promptVersionExtension":207,"promptVersionScoring":208,"score":758,"tags":878,"targetMarket":220,"tier":221},[215,879,880,881,882],"project-management","pipeline-architecture","agent-development","batch-processing",{"commitSha":284,"license":794},{"parentExtensionId":885,"repoId":886},"k1754dy3wbsv2a5gr1a983zzs586njca","kd7f12maf5nxmx5xttjx7scfnx86m1tv",[881,882,215,880,879],{"evaluatedAt":889,"extractAt":890,"updatedAt":889},1778694576171,1778694269038,{"_creationTime":892,"_id":893,"community":894,"display":895,"identity":899,"providers":902,"relations":911,"tags":912,"workflow":913},1778694269038.6682,"k1752cypc448mke749yjbkc65186mg6f",{"reviewCount":8},{"description":896,"installMethods":897,"name":898,"sourceUrl":870},"This skill should be used when the user asks to \"compress context\", \"summarize conversation history\", \"implement compaction\", \"reduce token usage\", or mentions context compression, structured summarization, tokens-per-task optimization, or long-running agent sessions exceeding context limits.",{"claudeCode":868},"Context Compression",{"basePath":900,"githubOwner":873,"githubRepo":874,"locale":18,"slug":901,"type":258},"skills/context-compression","context-compression",{"evaluate":903,"extract":910},{"promptVersionExtension":207,"promptVersionScoring":208,"score":758,"tags":904,"targetMarket":220,"tier":221},[905,215,906,907,908,909],"context-engineering","agent","summarization","compression","evaluation",{"commitSha":284,"license":794},{"parentExtensionId":885,"repoId":886},[906,908,905,909,215,907],{"evaluatedAt":914,"extractAt":890,"updatedAt":914},1778694410149]