[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"extension-skill-huggingface-huggingface-vision-trainer-de":3,"guides-for-huggingface-huggingface-vision-trainer":769,"similar-k171rcpcqjgapdaj3pg89j9tp186mf2w-de":770},{"_creationTime":4,"_id":5,"children":6,"community":7,"display":9,"evaluation":15,"identity":276,"isFallback":273,"parentExtension":281,"providers":316,"relations":320,"repo":321,"tags":767,"workflow":768},1778690773482.4893,"k171rcpcqjgapdaj3pg89j9tp186mf2w",[],{"reviewCount":8},0,{"description":10,"installMethods":11,"name":13,"sourceUrl":14},"Trains and fine-tunes vision models for object detection (D-FINE, RT-DETR v2, DETR, YOLOS), image classification (timm models — MobileNetV3, MobileViT, ResNet, ViT/DINOv3 — plus any Transformers classifier), and SAM/SAM2 segmentation using Hugging Face Transformers on Hugging Face Jobs cloud GPUs. Covers COCO-format dataset preparation, Albumentations augmentation, mAP/mAR evaluation, accuracy metrics, SAM segmentation with bbox/point prompts, DiceCE loss, hardware selection, cost estimation, Trackio monitoring, and Hub persistence. 
Use when users mention training object detection, image classification, SAM, SAM2, segmentation, image matting, DETR, D-FINE, RT-DETR, ViT, timm, MobileNet, ResNet, bounding box models, or fine-tuning vision models on Hugging Face Jobs.",{"claudeCode":12},"huggingface/skills","Hugging Face Vision Trainer","https://github.com/huggingface/skills",{"_creationTime":16,"_id":17,"extensionId":5,"locale":18,"result":19,"trustSignals":257,"workflow":274},1778691436038.3372,"kn732qkcjghpn7s28bk5bddcns86m3kn","en",{"checks":20,"evaluatedAt":209,"extensionSummary":210,"features":211,"nonGoals":217,"practices":221,"prerequisites":226,"promptVersionExtension":230,"promptVersionScoring":231,"purpose":232,"rationale":233,"score":234,"summary":235,"tags":236,"targetMarket":244,"tier":245,"useCases":246,"workflow":251},[21,26,29,32,36,39,43,46,50,54,58,61,64,68,72,76,78,80,82,84,86,88,90,92,94,97,100,103,106,110,114,117,121,124,127,130,133,136,139,142,145,149,153,157,160,163,166,170,173,176,179,182,185,188,192,196,199,202,206],{"category":22,"check":23,"severity":24,"summary":25},"Practical Utility","Problem relevance","pass","The description clearly states the problem of training vision models and specifies when to use the skill, mentioning specific model types and use cases.",{"category":22,"check":27,"severity":24,"summary":28},"Unique selling proposition","The skill offers significant value over a simple prompt by providing a structured workflow, specific scripts, and environment management for complex tasks like fine-tuning on cloud GPUs.",{"category":22,"check":30,"severity":24,"summary":31},"Production readiness","The skill provides end-to-end solutions including dataset preparation, model training, evaluation, and Hub persistence, making it suitable for production workflows.",{"category":33,"check":34,"severity":24,"summary":35},"Scope","Single responsibility principle","The skill focuses specifically on vision model training (object detection, classification, segmentation) 
and clearly delineates related tasks to other skills like `hugging-face-jobs`.",{"category":33,"check":37,"severity":24,"summary":38},"Description quality","The displayed description is accurate, concise, and well-structured, covering the skill's capabilities and usage scenarios effectively.",{"category":40,"check":41,"severity":24,"summary":42},"Invocation","Precise Purpose","The skill clearly defines its purpose and when to use it, specifying artifact types (models) and user intents (training, fine-tuning) with relevant triggers.",{"category":40,"check":44,"severity":24,"summary":45},"Concise Frontmatter","The frontmatter is concise and self-contained, effectively summarizing the core capabilities and providing trigger phrases for routing.",{"category":47,"check":48,"severity":24,"summary":49},"Documentation","Concise Body","The SKILL.md is well-organized with most detailed procedures and code examples delegated to separate reference files.",{"category":51,"check":52,"severity":24,"summary":53},"Context","Progressive Disclosure","Detailed workflows and specific guidance are provided in separate reference files, linked appropriately from the main SKILL.md.",{"category":51,"check":55,"severity":56,"summary":57},"Forked exploration","not_applicable","The skill does not appear to involve extensive exploration or multi-file inspection that would necessitate `context: fork`.",{"category":22,"check":59,"severity":24,"summary":60},"Usage examples","Multiple end-to-end examples are provided for object detection, image classification, and SAM segmentation, detailing inputs, invocations, and expected outcomes.",{"category":22,"check":62,"severity":24,"summary":63},"Edge cases","The documentation addresses common failure modes like OOM, dataset format errors, and Hub push failures, providing clear symptoms and recovery steps.",{"category":65,"check":66,"severity":56,"summary":67},"Code Execution","Tool Fallback","The skill does not appear to rely on external tools with the 
need for fallbacks; it primarily uses Hugging Face's internal libraries and infrastructure.",{"category":69,"check":70,"severity":24,"summary":71},"Safety","Halt on unexpected state","The included scripts and documentation emphasize verifying prerequisites and dataset formats, implying a halt on unexpected states.",{"category":73,"check":74,"severity":24,"summary":75},"Portability","Cross-skill coupling","The skill is self-contained for its vision training purpose and cross-links to related skills like `hugging-face-jobs` explicitly when necessary.",{"category":40,"check":44,"severity":24,"summary":77},"The frontmatter is concise and effectively summarizes the core capability, providing trigger phrases for precise routing.",{"category":47,"check":48,"severity":24,"summary":79},"The SKILL.md body is concise, delegating detailed material to separate files for progressive disclosure.",{"category":51,"check":52,"severity":24,"summary":81},"Long procedures and detailed information are split into reference files linked from SKILL.md.",{"category":51,"check":55,"severity":56,"summary":83},"The skill focuses on direct execution rather than deep exploration that would require forking.",{"category":22,"check":59,"severity":24,"summary":85},"Comprehensive and ready-to-use examples are provided for all core functionalities, covering inputs, invocations, and outcomes.",{"category":22,"check":62,"severity":24,"summary":87},"The documentation addresses common failure modes, limitations, and recovery steps for issues like OOM errors and dataset format mismatches.",{"category":65,"check":66,"severity":56,"summary":89},"The skill does not depend on optional external tools that would require fallbacks.",{"category":69,"check":70,"severity":24,"summary":91},"The skill emphasizes prerequisite verification and dataset validation, ensuring a halt on unexpected pre-states.",{"category":73,"check":74,"severity":24,"summary":93},"The skill is self-contained and clearly references related 
skills like `hugging-face-jobs` for additional functionalities.",{"category":33,"check":95,"severity":56,"summary":96},"Scoped tools","The skill operates via scripts and specific arguments rather than exposing individual tools in a manifest.",{"category":47,"check":98,"severity":24,"summary":99},"Configuration & parameter reference","All relevant parameters and configurations are documented within the SKILL.md and associated reference files, including default values and usage examples.",{"category":33,"check":101,"severity":56,"summary":102},"Tool naming","The skill uses command-line scripts and arguments, not explicitly named tools in a manifest.",{"category":33,"check":104,"severity":24,"summary":105},"Minimal I/O surface","Scripts use structured arguments and expected inputs/outputs, adhering to best practices for minimizing I/O surface.",{"category":107,"check":108,"severity":24,"summary":109},"License","License usability","The extension is licensed under Apache-2.0, a permissive open-source license, as indicated by the bundled LICENSE file.",{"category":111,"check":112,"severity":24,"summary":113},"Maintenance","Commit recency","The repository shows recent commits within the last 3 months, indicating active maintenance.",{"category":111,"check":115,"severity":24,"summary":116},"Dependency Management","Dependencies are managed via PEP 723 inline headers in scripts, ensuring proper pinning and reproducibility.",{"category":118,"check":119,"severity":24,"summary":120},"Security","Secret Management","The skill correctly handles Hugging Face tokens via job secrets, avoiding hardcoding and ensuring secure transmission.",{"category":118,"check":122,"severity":24,"summary":123},"Injection","The skill appears to handle data and scripts safely, with no indications of loading untrusted code or instructions from external sources.",{"category":118,"check":125,"severity":24,"summary":126},"Transitive Supply-Chain Grenades","The skill uses committed scripts and dependencies, 
avoiding runtime downloads or remote code execution that could pose supply-chain risks.",{"category":118,"check":128,"severity":24,"summary":129},"Sandbox Isolation","The skill operates within its designated scope and does not attempt to access or modify files outside the project folder.",{"category":118,"check":131,"severity":24,"summary":132},"Sandbox escape primitives","No detached process spawns or deny-retry loops were detected in the scripts.",{"category":118,"check":134,"severity":24,"summary":135},"Data Exfiltration","The skill does not contain instructions for reading or submitting confidential data to third parties, and outbound calls are documented or expected.",{"category":118,"check":137,"severity":24,"summary":138},"Hidden Text Tricks","The bundled content and descriptions are free of hidden steering tricks, Unicode tag characters, or other obfuscation methods.",{"category":118,"check":140,"severity":24,"summary":141},"Opaque code execution","All bundled scripts are in plain, readable source code format, with no obfuscation like base64 payloads or runtime downloads.",{"category":73,"check":143,"severity":24,"summary":144},"Structural Assumption","The skill uses relative paths and standard project structures, avoiding assumptions about user-specific file layouts.",{"category":146,"check":147,"severity":24,"summary":148},"Trust","Issues Attention","The maintainer engagement is high, with a good ratio of closed to open issues in the last 90 days.",{"category":150,"check":151,"severity":24,"summary":152},"Versioning","Release Management","Meaningful semver versioning is indicated via the SKILL.md frontmatter and the commit history.",{"category":154,"check":155,"severity":24,"summary":156},"Execution","Validation","Input arguments and script execution appear to be handled robustly, with clear error messages and validation checks.",{"category":118,"check":158,"severity":24,"summary":159},"Unguarded Destructive Operations","The skill is primarily analytical 
and does not perform destructive operations without appropriate safeguards.",{"category":65,"check":161,"severity":24,"summary":162},"Error Handling","Errors are caught, categorized, and reported with meaningful messages and hints for recovery.",{"category":65,"check":164,"severity":24,"summary":165},"Logging","The skill uses Trackio for logging and auditing of actions, providing a clear record of operations.",{"category":167,"check":168,"severity":56,"summary":169},"Compliance","GDPR","The skill does not operate on personal data, making GDPR compliance not applicable.",{"category":167,"check":171,"severity":24,"summary":172},"Target market","The skill is designed for global usability and does not exhibit regional or jurisdictional restrictions.",{"category":73,"check":174,"severity":24,"summary":175},"Runtime stability","The skill uses standard Python libraries and Hugging Face infrastructure, ensuring broad compatibility across environments.",{"category":47,"check":177,"severity":24,"summary":178},"README","The README file clearly outlines the purpose of the Hugging Face Skills repository and provides installation instructions.",{"category":33,"check":180,"severity":56,"summary":181},"Tool surface size","This is a skill operating via scripts, not a collection of individual tools.",{"category":40,"check":183,"severity":56,"summary":184},"Overlapping near-synonym tools","As the skill uses scripts and command-line arguments, there are no overlapping tool names to consider.",{"category":47,"check":186,"severity":24,"summary":187},"Phantom features","All advertised features and capabilities have corresponding implementations in the scripts and documentation.",{"category":189,"check":190,"severity":24,"summary":191},"Install","Installation instruction","Clear installation instructions are provided for multiple environments (Claude Code, Codex, Gemini CLI, Cursor), including copy-paste examples and authentication 
guidance.",{"category":193,"check":194,"severity":24,"summary":195},"Errors","Actionable error messages","Error messages in scripts and documentation are actionable, providing context on the failure and remediation steps.",{"category":154,"check":197,"severity":24,"summary":198},"Pinned dependencies","Dependencies are pinned via PEP 723 headers in the scripts, ensuring consistent execution environments.",{"category":33,"check":200,"severity":56,"summary":201},"Dry-run preview","The skill does not expose state-changing commands that would typically require a dry-run mode.",{"category":203,"check":204,"severity":24,"summary":205},"Protocol","Idempotent retry & timeouts","The script handling and job submission guidelines emphasize setting appropriate timeouts and handling failures, promoting reliable execution.",{"category":167,"check":207,"severity":24,"summary":208},"Telemetry opt-in","Telemetry is opt-in via Trackio, with clear documentation on its usage and purpose.",1778691435931,"This skill facilitates training and fine-tuning of vision models for object detection, image classification, and segmentation tasks using Hugging Face Transformers and Jobs. 
It covers dataset preparation, augmentation, evaluation, and Hub persistence.",[212,213,214,215,216],"Train object detection, image classification, and SAM/SAM2 segmentation models","Utilizes Hugging Face Transformers and Jobs for cloud-based training","Supports COCO dataset format, Albumentations augmentation, and mAP/mAR evaluation","Automates Hub persistence for trained models and includes cost/time estimation","Provides sample scripts and detailed guidance for various vision tasks",[218,219,220],"Training text or language models (use `hugging-face-llm-trainer` skill).","Managing local GPU environments or dependencies.","Providing general Hugging Face Hub operations (use `hf-cli` skill).",[222,223,224,225],"Model training workflows","Dataset preparation and validation","Cloud-based ML operations","Experiment tracking and monitoring",[227,228,229],"Hugging Face Account with Pro, Team, or Enterprise plan","Authenticated login (`hf_whoami`) with token having write permissions","Token must be passed in job secrets","3.0.0","4.4.0","Enables users to train and fine-tune vision models on cloud GPUs without local setup, abstracting away complex configurations and providing end-to-end workflow management.","The skill is exceptionally well-documented, provides robust examples, and adheres to best practices for reliability, security, and usability. 
All checks passed or were not applicable.",99,"Comprehensive and production-ready skill for vision model training on Hugging Face Jobs.",[237,238,239,240,241,242,243],"computer-vision","object-detection","image-classification","segmentation","deep-learning","huggingface","cloud-training","global","verified",[247,248,249,250],"Fine-tuning object detection models like D-FINE or RT-DETR on custom datasets.","Training image classification models using timm or Transformers classifiers on cloud GPUs.","Fine-tuning SAM/SAM2 for segmentation tasks using bounding box or point prompts.","Estimating training time and cost before launching a job.",[252,253,254,255,256],"Verify prerequisites (account, token, dataset).","Validate dataset format using provided inspector script.","Ask user about dataset size preferences and validation split.","Prepare training script based on task (OD, IC, SAM).","Save script locally, submit job via `hf_jobs` or `HfApi`, and report details.",{"codeQuality":258,"collectedAt":260,"documentation":261,"maintenance":264,"security":270,"testCoverage":272},{"hasLockfile":259},false,1778691424708,{"descriptionLength":262,"readmeSize":263},775,9821,{"closedIssues90d":265,"forks":266,"hasChangelog":259,"openIssues90d":267,"pushedAt":268,"stars":269},6,663,4,1778593131000,10482,{"hasNpmPackage":259,"license":271,"smitheryVerified":259},"Apache-2.0",{"hasCi":273,"hasTests":259},true,{"updatedAt":275},1778691436038,{"basePath":277,"githubOwner":242,"githubRepo":278,"locale":18,"slug":279,"type":280},"skills/huggingface-vision-trainer","skills","huggingface-vision-trainer","skill",{"_creationTime":282,"_id":283,"community":284,"display":285,"identity":290,"parentExtension":293,"providers":294,"relations":310,"tags":312,"workflow":313},1778690773482.486,"k175g1spb5757qt4tnj9cktcn986mshy",{"reviewCount":8},{"description":286,"installMethods":287,"name":289,"sourceUrl":14},"Agent Skills for AI/ML tasks including dataset creation, model training, evaluation, and 
research paper publishing on Hugging Face Hub",{"claudeCode":288},"huggingface-skills","Hugging Face Skills",{"basePath":291,"githubOwner":242,"githubRepo":278,"locale":18,"slug":278,"type":292},"","plugin",null,{"evaluate":295,"extract":305},{"promptVersionExtension":230,"promptVersionScoring":231,"score":296,"tags":297,"targetMarket":244,"tier":245},98,[242,298,299,300,301,302,303,304],"ai","ml","datasets","models","training","cli","python",{"commitSha":306,"license":271,"plugin":307},"HEAD",{"mcpCount":8,"provider":308,"skillCount":309},"classify",14,{"repoId":311},"kd72xwt5xnc0ktc4p7smzfcp3986m959",[298,303,300,242,299,301,304,302],{"evaluatedAt":314,"extractAt":315,"updatedAt":314},1778691185872,1778690773482,{"evaluate":317,"extract":319},{"promptVersionExtension":230,"promptVersionScoring":231,"score":234,"tags":318,"targetMarket":244,"tier":245},[237,238,239,240,241,242,243],{"commitSha":306,"license":271},{"parentExtensionId":283,"repoId":311},{"_creationTime":322,"_id":311,"identity":323,"providers":324,"workflow":763},1778689536128.5474,{"githubOwner":242,"githubRepo":278,"sourceUrl":14},{"classify":325,"discover":756,"github":759},{"commitSha":306,"extensions":326},[327,341,350,358,366,374,382,390,398,406,414,422,430,438,444,452,495,503,509,515,532,538,545,587,598,617,623,643,655,678,736],{"basePath":291,"description":286,"displayName":288,"installMethods":328,"rationale":329,"selectedPaths":330,"source":339,"sourceLanguage":18,"type":340},{"claudeCode":12},"marketplace.json at .claude-plugin/marketplace.json",[331,334,336],{"path":332,"priority":333},".claude-plugin/marketplace.json","mandatory",{"path":335,"priority":333},"README.md",{"path":337,"priority":338},"LICENSE","high","rule","marketplace",{"basePath":342,"description":343,"displayName":344,"installMethods":345,"rationale":346,"selectedPaths":347,"source":339,"sourceLanguage":18,"type":292},"skills/huggingface-llm-trainer","Train or fine-tune language models using TRL on Hugging Face Jobs 
infrastructure. Covers SFT, DPO, GRPO and reward modeling training methods, plus GGUF conversion for local deployment. Includes hardware selection, cost estimation, Trackio monitoring, and Hub persistence.","huggingface-llm-trainer",{"claudeCode":344},"inline plugin source from marketplace.json at skills/huggingface-llm-trainer",[348],{"path":349,"priority":338},"SKILL.md",{"basePath":351,"description":352,"displayName":353,"installMethods":354,"rationale":355,"selectedPaths":356,"source":339,"sourceLanguage":18,"type":292},"skills/huggingface-local-models","Use to select models to run locally with llama.cpp and GGUF on CPU, Mac Metal, CUDA, or ROCm. Covers finding GGUFs, quant selection, running servers, exact GGUF file lookup, conversion, and OpenAI-compatible local serving.","huggingface-local-models",{"claudeCode":353},"inline plugin source from marketplace.json at skills/huggingface-local-models",[357],{"path":349,"priority":338},{"basePath":359,"description":360,"displayName":361,"installMethods":362,"rationale":363,"selectedPaths":364,"source":339,"sourceLanguage":18,"type":292},"skills/huggingface-paper-publisher","Publish and manage research papers on Hugging Face Hub. 
Supports creating paper pages, linking papers to models/datasets, claiming authorship, and generating professional markdown-based research articles.","huggingface-paper-publisher",{"claudeCode":361},"inline plugin source from marketplace.json at skills/huggingface-paper-publisher",[365],{"path":349,"priority":338},{"basePath":367,"description":368,"displayName":369,"installMethods":370,"rationale":371,"selectedPaths":372,"source":339,"sourceLanguage":18,"type":292},"skills/huggingface-papers","Look up and read Hugging Face paper pages in markdown, and use the papers API for structured metadata like authors, linked models, datasets, Spaces, and media URLs when needed.","huggingface-papers",{"claudeCode":369},"inline plugin source from marketplace.json at skills/huggingface-papers",[373],{"path":349,"priority":338},{"basePath":375,"description":376,"displayName":377,"installMethods":378,"rationale":379,"selectedPaths":380,"source":339,"sourceLanguage":18,"type":292},"skills/huggingface-community-evals","Add and manage evaluation results in Hugging Face model cards. Supports extracting eval tables from README content, importing scores from Artificial Analysis API, and running custom evaluations with vLLM/lighteval.","huggingface-community-evals",{"claudeCode":377},"inline plugin source from marketplace.json at skills/huggingface-community-evals",[381],{"path":349,"priority":338},{"basePath":383,"description":384,"displayName":385,"installMethods":386,"rationale":387,"selectedPaths":388,"source":339,"sourceLanguage":18,"type":292},"skills/huggingface-best","Find the best AI model for any task by querying Hugging Face leaderboards and benchmarks. 
Recommends top models based on task type, hardware constraints, and benchmark scores.","huggingface-best",{"claudeCode":385},"inline plugin source from marketplace.json at skills/huggingface-best",[389],{"path":349,"priority":338},{"basePath":391,"description":392,"displayName":393,"installMethods":394,"rationale":395,"selectedPaths":396,"source":339,"sourceLanguage":18,"type":292},"skills/hf-cli","Execute Hugging Face Hub operations using the hf CLI. Download models/datasets, upload files, manage repos, and run cloud compute jobs.","hf-cli",{"claudeCode":393},"inline plugin source from marketplace.json at skills/hf-cli",[397],{"path":349,"priority":338},{"basePath":399,"description":400,"displayName":401,"installMethods":402,"rationale":403,"selectedPaths":404,"source":339,"sourceLanguage":18,"type":292},"skills/huggingface-trackio","Track and visualize ML training experiments with Trackio. Log metrics via Python API and retrieve them via CLI. Supports real-time dashboards synced to HF Spaces.","huggingface-trackio",{"claudeCode":401},"inline plugin source from marketplace.json at skills/huggingface-trackio",[405],{"path":349,"priority":338},{"basePath":407,"description":408,"displayName":409,"installMethods":410,"rationale":411,"selectedPaths":412,"source":339,"sourceLanguage":18,"type":292},"skills/huggingface-datasets","Explore, query, and extract data from any Hugging Face dataset using the Dataset Viewer REST API and npx tooling. 
Zero Python dependencies — covers split/config discovery, row pagination, text search, filtering, SQL via parquetlens, and dataset upload via CLI.","huggingface-datasets",{"claudeCode":409},"inline plugin source from marketplace.json at skills/huggingface-datasets",[413],{"path":349,"priority":338},{"basePath":415,"description":416,"displayName":417,"installMethods":418,"rationale":419,"selectedPaths":420,"source":339,"sourceLanguage":18,"type":292},"skills/huggingface-tool-builder","Build reusable scripts for Hugging Face Hub and API workflows. Useful for chaining API calls, enriching Hub metadata, or automating repeated tasks.","huggingface-tool-builder",{"claudeCode":417},"inline plugin source from marketplace.json at skills/huggingface-tool-builder",[421],{"path":349,"priority":338},{"basePath":423,"description":424,"displayName":425,"installMethods":426,"rationale":427,"selectedPaths":428,"source":339,"sourceLanguage":18,"type":292},"skills/huggingface-gradio","Build Gradio web UIs and demos in Python. Use when creating or editing Gradio apps, components, event listeners, layouts, or chatbots.","huggingface-gradio",{"claudeCode":425},"inline plugin source from marketplace.json at skills/huggingface-gradio",[429],{"path":349,"priority":338},{"basePath":431,"description":432,"displayName":433,"installMethods":434,"rationale":435,"selectedPaths":436,"source":339,"sourceLanguage":18,"type":292},"skills/transformers-js","Run state-of-the-art machine learning models directly in JavaScript/TypeScript for NLP, computer vision, audio processing, and multimodal tasks. 
Works in Node.js and browsers with WebGPU/WASM using Hugging Face models.","transformers-js",{"claudeCode":433},"inline plugin source from marketplace.json at skills/transformers-js",[437],{"path":349,"priority":338},{"basePath":277,"description":439,"displayName":279,"installMethods":440,"rationale":441,"selectedPaths":442,"source":339,"sourceLanguage":18,"type":292},"Train and fine-tune object detection models (RTDETRv2, YOLOS, DETR and others) and image classification models (timm and transformers models — MobileNetV3, MobileViT, ResNet, ViT/DINOv3) using Transformers Trainer API on Hugging Face Jobs infrastructure or locally. Includes COCO dataset format support, Albumentations augmentation, mAP/mAR metrics, trackio tracking, hardware selection, and Hub persistence.",{"claudeCode":279},"inline plugin source from marketplace.json at skills/huggingface-vision-trainer",[443],{"path":349,"priority":338},{"basePath":445,"description":446,"displayName":447,"installMethods":448,"rationale":449,"selectedPaths":450,"source":339,"sourceLanguage":18,"type":292},"skills/train-sentence-transformers","Train or fine-tune sentence-transformers models across all three architectures: SentenceTransformer (bi-encoder embeddings), CrossEncoder (rerankers), and SparseEncoder (SPLADE). 
Covers loss selection, hard-negative mining, evaluators, distillation, LoRA, Matryoshka, and Hugging Face Hub publishing.","train-sentence-transformers",{"claudeCode":447},"inline plugin source from marketplace.json at skills/train-sentence-transformers",[451],{"path":349,"priority":338},{"basePath":291,"description":286,"displayName":288,"installMethods":453,"license":271,"rationale":454,"selectedPaths":455,"source":339,"sourceLanguage":18,"type":292},{"claudeCode":288},"plugin manifest at .claude-plugin/plugin.json",[456,458,459,460,463,465,467,469,471,473,475,477,479,481,483,485,487,489,491,493],{"path":457,"priority":333},".claude-plugin/plugin.json",{"path":335,"priority":333},{"path":337,"priority":338},{"path":461,"priority":462},"skills/hf-cli/SKILL.md","medium",{"path":464,"priority":462},"skills/huggingface-best/SKILL.md",{"path":466,"priority":462},"skills/huggingface-community-evals/SKILL.md",{"path":468,"priority":462},"skills/huggingface-datasets/SKILL.md",{"path":470,"priority":462},"skills/huggingface-gradio/SKILL.md",{"path":472,"priority":462},"skills/huggingface-llm-trainer/SKILL.md",{"path":474,"priority":462},"skills/huggingface-local-models/SKILL.md",{"path":476,"priority":462},"skills/huggingface-paper-publisher/SKILL.md",{"path":478,"priority":462},"skills/huggingface-papers/SKILL.md",{"path":480,"priority":462},"skills/huggingface-tool-builder/SKILL.md",{"path":482,"priority":462},"skills/huggingface-trackio/SKILL.md",{"path":484,"priority":462},"skills/huggingface-vision-trainer/SKILL.md",{"path":486,"priority":462},"skills/train-sentence-transformers/SKILL.md",{"path":488,"priority":462},"skills/transformers-js/SKILL.md",{"path":490,"priority":333},".mcp.json",{"path":492,"priority":338},"agents/AGENTS.md",{"path":494,"priority":338},".cursor-plugin/plugin.json",{"basePath":496,"description":497,"displayName":498,"installMethods":499,"rationale":500,"selectedPaths":501,"source":339,"sourceLanguage":18,"type":280},"hf-mcp/skills/hf-mcp",
"Use Hugging Face Hub via MCP server tools. Search models, datasets, Spaces, papers. Get repo details, fetch documentation, run compute jobs, and use Gradio Spaces as AI tools. Available when connected to the HF MCP server.","hf-mcp",{"claudeCode":12},"SKILL.md frontmatter at hf-mcp/skills/hf-mcp/SKILL.md",[502],{"path":349,"priority":333},{"basePath":391,"description":504,"displayName":393,"installMethods":505,"rationale":506,"selectedPaths":507,"source":339,"sourceLanguage":18,"type":280},"Hugging Face Hub CLI (`hf`) for downloading, uploading, and managing models, datasets, spaces, buckets, repos, papers, jobs, and more on the Hugging Face Hub. Use when: handling authentication; managing local cache; managing Hugging Face Buckets; running or scheduling jobs on Hugging Face infrastructure; managing Hugging Face repos; discussions and pull requests; browsing models, datasets and spaces; reading, searching, or browsing academic papers; managing collections; querying datasets; configuring spaces; setting up webhooks; or deploying and managing HF Inference Endpoints. Make sure to use this skill whenever the user mentions 'hf', 'huggingface', 'Hugging Face', 'huggingface-cli', or 'hugging face cli', or wants to do anything related to the Hugging Face ecosystem and to AI and ML in general. Also use for cloud storage needs like training checkpoints, data pipelines, or agent traces. Use even if the user doesn't explicitly ask for a CLI command. Replaces the deprecated `huggingface-cli`.",{"claudeCode":12},"SKILL.md frontmatter at skills/hf-cli/SKILL.md",[508],{"path":349,"priority":333},{"basePath":383,"description":510,"displayName":385,"installMethods":511,"rationale":512,"selectedPaths":513,"source":339,"sourceLanguage":18,"type":280},"Use when the user asks about finding the best, top, or recommended model for a task, wants to know what AI model to use, or wants to compare models by benchmark scores. 
Triggers on: \"best model for X\", \"what model should I use for\", \"top models for [task]\", \"which model runs on my laptop/machine/device\", \"recommend a model for\", \"what LLM should I use for\", \"compare models for\", \"what's state of the art for\", or any question about choosing an AI model for a specific use case. Always use this skill when the user wants model recommendations or comparisons, even if they don't explicitly mention HuggingFace or benchmarks.\n",{"claudeCode":12},"SKILL.md frontmatter at skills/huggingface-best/SKILL.md",[514],{"path":349,"priority":333},{"basePath":375,"description":516,"displayName":377,"installMethods":517,"rationale":518,"selectedPaths":519,"source":339,"sourceLanguage":18,"type":280},"Run evaluations for Hugging Face Hub models using inspect-ai and lighteval on local hardware. Use for backend selection, local GPU evals, and choosing between vLLM / Transformers / accelerate. Not for HF Jobs orchestration, model-card PRs, .eval_results publication, or community-evals automation.",{"claudeCode":12},"SKILL.md frontmatter at skills/huggingface-community-evals/SKILL.md",[520,521,524,526,528,530],{"path":349,"priority":333},{"path":522,"priority":523},"examples/.env.example","low",{"path":525,"priority":523},"examples/USAGE_EXAMPLES.md",{"path":527,"priority":523},"scripts/inspect_eval_uv.py",{"path":529,"priority":523},"scripts/inspect_vllm_uv.py",{"path":531,"priority":523},"scripts/lighteval_vllm_uv.py",{"basePath":407,"description":533,"displayName":409,"installMethods":534,"rationale":535,"selectedPaths":536,"source":339,"sourceLanguage":18,"type":280},"Use this skill for Hugging Face Dataset Viewer API workflows that fetch subset/split metadata, paginate rows, search text, apply filters, download parquet URLs, and read size or statistics.\r",{"claudeCode":12},"SKILL.md frontmatter at 
skills/huggingface-datasets/SKILL.md",[537],{"path":349,"priority":333},{"basePath":423,"description":424,"displayName":425,"installMethods":539,"rationale":540,"selectedPaths":541,"source":339,"sourceLanguage":18,"type":280},{"claudeCode":12},"SKILL.md frontmatter at skills/huggingface-gradio/SKILL.md",[542,543],{"path":349,"priority":333},{"path":544,"priority":462},"examples.md",{"basePath":342,"description":546,"displayName":344,"installMethods":547,"rationale":548,"selectedPaths":549,"source":339,"sourceLanguage":18,"type":280},"Train or fine-tune language and vision models using TRL (Transformer Reinforcement Learning) or Unsloth with Hugging Face Jobs infrastructure. Covers SFT, DPO, GRPO and reward modeling training methods, plus GGUF conversion for local deployment. Includes guidance on the TRL Jobs package, UV scripts with PEP 723 format, dataset preparation and validation, hardware selection, cost estimation, Trackio monitoring, Hub authentication, model selection/leaderboards and model persistence. 
Use for tasks involving cloud GPU training, GGUF conversion, or when users mention training on Hugging Face Jobs without local GPU setup.",{"claudeCode":12},"SKILL.md frontmatter at skills/huggingface-llm-trainer/SKILL.md",[550,551,553,555,557,559,561,563,565,567,569,571,573,575,577,579,581,583,585],{"path":349,"priority":333},{"path":552,"priority":462},"references/gguf_conversion.md",{"path":554,"priority":462},"references/hardware_guide.md",{"path":556,"priority":462},"references/hub_saving.md",{"path":558,"priority":462},"references/local_training_macos.md",{"path":560,"priority":462},"references/reliability_principles.md",{"path":562,"priority":462},"references/trackio_guide.md",{"path":564,"priority":462},"references/training_methods.md",{"path":566,"priority":462},"references/training_patterns.md",{"path":568,"priority":462},"references/troubleshooting.md",{"path":570,"priority":462},"references/unsloth.md",{"path":572,"priority":523},"scripts/convert_to_gguf.py",{"path":574,"priority":523},"scripts/dataset_inspector.py",{"path":576,"priority":523},"scripts/estimate_cost.py",{"path":578,"priority":523},"scripts/hf_benchmarks.py",{"path":580,"priority":523},"scripts/train_dpo_example.py",{"path":582,"priority":523},"scripts/train_grpo_example.py",{"path":584,"priority":523},"scripts/train_sft_example.py",{"path":586,"priority":523},"scripts/unsloth_sft_example.py",{"basePath":351,"description":352,"displayName":353,"installMethods":588,"rationale":589,"selectedPaths":590,"source":339,"sourceLanguage":18,"type":280},{"claudeCode":12},"SKILL.md frontmatter at 
skills/huggingface-local-models/SKILL.md",[591,592,594,596],{"path":349,"priority":333},{"path":593,"priority":462},"references/hardware.md",{"path":595,"priority":462},"references/hub-discovery.md",{"path":597,"priority":462},"references/quantization.md",{"basePath":359,"description":360,"displayName":361,"installMethods":599,"rationale":600,"selectedPaths":601,"source":339,"sourceLanguage":18,"type":280},{"claudeCode":12},"SKILL.md frontmatter at skills/huggingface-paper-publisher/SKILL.md",[602,603,605,607,609,611,613,615],{"path":349,"priority":333},{"path":604,"priority":523},"examples/example_usage.md",{"path":606,"priority":462},"references/quick_reference.md",{"path":608,"priority":523},"scripts/paper_manager.py",{"path":610,"priority":523},"templates/arxiv.md",{"path":612,"priority":523},"templates/ml-report.md",{"path":614,"priority":523},"templates/modern.md",{"path":616,"priority":523},"templates/standard.md",{"basePath":367,"description":618,"displayName":369,"installMethods":619,"rationale":620,"selectedPaths":621,"source":339,"sourceLanguage":18,"type":280},"Look up and read Hugging Face paper pages in markdown, and use the papers API for structured metadata such as authors, linked models/datasets/spaces, Github repo and project page. Use when the user shares a Hugging Face paper page URL, an arXiv URL or ID, or asks to summarize, explain, or analyze an AI research paper.",{"claudeCode":12},"SKILL.md frontmatter at skills/huggingface-papers/SKILL.md",[622],{"path":349,"priority":333},{"basePath":415,"description":624,"displayName":417,"installMethods":625,"rationale":626,"selectedPaths":627,"source":339,"sourceLanguage":18,"type":280},"Use this skill when the user wants to build tool/scripts or achieve a task where using data from the Hugging Face API would help. This is especially useful when chaining or combining API calls or the task will be repeated/automated. 
This Skill creates a reusable script to fetch, enrich or process data.",{"claudeCode":12},"SKILL.md frontmatter at skills/huggingface-tool-builder/SKILL.md",[628,629,631,633,635,637,639,641],{"path":349,"priority":333},{"path":630,"priority":462},"references/baseline_hf_api.py",{"path":632,"priority":462},"references/baseline_hf_api.sh",{"path":634,"priority":462},"references/baseline_hf_api.tsx",{"path":636,"priority":462},"references/find_models_by_paper.sh",{"path":638,"priority":462},"references/hf_enrich_models.sh",{"path":640,"priority":462},"references/hf_model_card_frontmatter.sh",{"path":642,"priority":462},"references/hf_model_papers_auth.sh",{"basePath":399,"description":644,"displayName":401,"installMethods":645,"rationale":646,"selectedPaths":647,"source":339,"sourceLanguage":18,"type":280},"Track and visualize ML training experiments with Trackio. Use when logging metrics during training (Python API), firing alerts for training diagnostics, or retrieving/analyzing logged metrics (CLI). 
Supports real-time dashboard visualization, alerts with webhooks, HF Space syncing, and JSON output for automation.",{"claudeCode":12},"SKILL.md frontmatter at skills/huggingface-trackio/SKILL.md",[648,649,651,653],{"path":349,"priority":333},{"path":650,"priority":462},"references/alerts.md",{"path":652,"priority":462},"references/logging_metrics.md",{"path":654,"priority":462},"references/retrieving_metrics.md",{"basePath":277,"description":10,"displayName":279,"installMethods":656,"rationale":657,"selectedPaths":658,"source":339,"sourceLanguage":18,"type":280},{"claudeCode":12},"SKILL.md frontmatter at skills/huggingface-vision-trainer/SKILL.md",[659,660,662,663,665,667,668,670,671,672,674,676],{"path":349,"priority":333},{"path":661,"priority":462},"references/finetune_sam2_trainer.md",{"path":556,"priority":462},{"path":664,"priority":462},"references/image_classification_training_notebook.md",{"path":666,"priority":462},"references/object_detection_training_notebook.md",{"path":560,"priority":462},{"path":669,"priority":462},"references/timm_trainer.md",{"path":574,"priority":523},{"path":576,"priority":523},{"path":673,"priority":523},"scripts/image_classification_training.py",{"path":675,"priority":523},"scripts/object_detection_training.py",{"path":677,"priority":523},"scripts/sam_segmentation_training.py",{"basePath":445,"description":679,"displayName":447,"installMethods":680,"rationale":681,"selectedPaths":682,"source":339,"sourceLanguage":18,"type":280},"Train or fine-tune sentence-transformers models across `SentenceTransformer` (bi-encoder; dense or static embedding model; for retrieval, similarity, clustering, classification, paraphrase mining, dedup, multimodal), `CrossEncoder` (reranker; pair scoring for two-stage retrieval / pair classification), and `SparseEncoder` (SPLADE, sparse embedding model; for learned-sparse retrieval). 
Covers loss selection, hard-negative mining, evaluators, distillation, LoRA, Matryoshka, and Hugging Face Hub publishing. Use for any sentence-transformers training task.",{"claudeCode":12},"SKILL.md frontmatter at skills/train-sentence-transformers/SKILL.md",[683,684,686,688,690,692,694,695,697,699,701,703,705,707,709,710,712,714,716,718,720,722,724,726,728,730,732,734],{"path":349,"priority":333},{"path":685,"priority":462},"references/base_model_selection.md",{"path":687,"priority":462},"references/dataset_formats.md",{"path":689,"priority":462},"references/evaluators_cross_encoder.md",{"path":691,"priority":462},"references/evaluators_sentence_transformer.md",{"path":693,"priority":462},"references/evaluators_sparse_encoder.md",{"path":554,"priority":462},{"path":696,"priority":462},"references/hf_jobs_execution.md",{"path":698,"priority":462},"references/losses_cross_encoder.md",{"path":700,"priority":462},"references/losses_sentence_transformer.md",{"path":702,"priority":462},"references/losses_sparse_encoder.md",{"path":704,"priority":462},"references/model_architectures.md",{"path":706,"priority":462},"references/prompts_and_instructions.md",{"path":708,"priority":462},"references/training_args.md",{"path":568,"priority":462},{"path":711,"priority":523},"scripts/mine_hard_negatives.py",{"path":713,"priority":523},"scripts/train_cross_encoder_distillation_example.py",{"path":715,"priority":523},"scripts/train_cross_encoder_example.py",{"path":717,"priority":523},"scripts/train_cross_encoder_listwise_example.py",{"path":719,"priority":523},"scripts/train_sentence_transformer_distillation_example.py",{"path":721,"priority":523},"scripts/train_sentence_transformer_example.py",{"path":723,"priority":523},"scripts/train_sentence_transformer_make_multilingual_example.py",{"path":725,"priority":523},"scripts/train_sentence_transformer_matryoshka_example.py",{"path":727,"priority":523},"scripts/train_sentence_transformer_multi_dataset_example.py",{"path":729,"priorit
y":523},"scripts/train_sentence_transformer_static_embedding_example.py",{"path":731,"priority":523},"scripts/train_sentence_transformer_with_lora_example.py",{"path":733,"priority":523},"scripts/train_sparse_encoder_distillation_example.py",{"path":735,"priority":523},"scripts/train_sparse_encoder_example.py",{"basePath":431,"description":737,"displayName":433,"installMethods":738,"rationale":739,"selectedPaths":740,"source":339,"sourceLanguage":18,"type":280},"Use Transformers.js to run state-of-the-art machine learning models directly in JavaScript/TypeScript. Supports NLP (text classification, translation, summarization), computer vision (image classification, object detection), audio (speech recognition, audio classification), and multimodal tasks. Works in browsers and server-side runtimes (Node.js, Bun, Deno) with WebGPU/WASM using pre-trained models from Hugging Face Hub.",{"claudeCode":12},"SKILL.md frontmatter at skills/transformers-js/SKILL.md",[741,742,744,746,748,750,752,754],{"path":349,"priority":333},{"path":743,"priority":462},"references/CACHE.md",{"path":745,"priority":462},"references/CONFIGURATION.md",{"path":747,"priority":462},"references/EXAMPLES.md",{"path":749,"priority":462},"references/MODEL_ARCHITECTURES.md",{"path":751,"priority":462},"references/MODEL_REGISTRY.md",{"path":753,"priority":462},"references/PIPELINE_OPTIONS.md",{"path":755,"priority":462},"references/TEXT_GENERATION.md",{"sources":757},[758],"manual",{"closedIssues90d":265,"description":760,"forks":266,"homepage":761,"license":271,"openIssues90d":267,"pushedAt":268,"readmeSize":263,"stars":269,"topics":762},"Give your agents the power of the Hugging Face 
ecosystem","https://huggingface.co",[],{"classifiedAt":764,"discoverAt":765,"extractAt":766,"githubAt":766,"updatedAt":764},1778690772996,1778689536128,1778690770714,[243,237,241,242,239,238,240],{"evaluatedAt":275,"extractAt":315,"updatedAt":275},[],[771,800,828,857,886,907],{"_creationTime":772,"_id":773,"community":774,"display":775,"identity":781,"providers":786,"relations":794,"tags":796,"workflow":797},1778691799740.4983,"k1797kqt7c6p7gytn30rckwzvh86nz20",{"reviewCount":8},{"description":776,"installMethods":777,"name":779,"sourceUrl":780},"This skill should be used when working with pre-trained transformer models for natural language processing, computer vision, audio, or multimodal tasks. Use for text generation, classification, question answering, translation, summarization, image classification, object detection, speech recognition, and fine-tuning models on custom datasets.",{"claudeCode":778},"K-Dense-AI/claude-scientific-skills","Transformers","https://github.com/K-Dense-AI/claude-scientific-skills",{"basePath":782,"githubOwner":783,"githubRepo":784,"locale":18,"slug":785,"type":280},"scientific-skills/transformers","K-Dense-AI","claude-scientific-skills","transformers",{"evaluate":787,"extract":793},{"promptVersionExtension":230,"promptVersionScoring":231,"score":296,"tags":788,"targetMarket":244,"tier":245},[789,237,790,791,792,241,785,242],"nlp","audio","multimodal","machine-learning",{"commitSha":306,"license":271},{"repoId":795},"kd79rphh5gexy91xmpxc05h5mh86mm9r",[790,237,241,242,792,791,789,785],{"evaluatedAt":798,"extractAt":799,"updatedAt":798},1778694649795,1778691799740,{"_creationTime":801,"_id":802,"community":803,"display":804,"identity":810,"providers":815,"relations":821,"tags":824,"workflow":825},1778695116697.195,"k17b19qdkvp8hejjtcea8mhcqh86nkjy",{"reviewCount":8},{"description":805,"installMethods":806,"name":808,"sourceUrl":809},"Foundation model for image segmentation with zero-shot transfer. 
Use when you need to segment any object in images using points, boxes, or masks as prompts, or automatically generate all object masks in an image.",{"claudeCode":807},"Orchestra-Research/AI-Research-SKILLs","segment-anything-model","https://github.com/Orchestra-Research/AI-Research-SKILLs",{"basePath":811,"githubOwner":812,"githubRepo":813,"locale":18,"slug":814,"type":280},"18-multimodal/segment-anything","Orchestra-Research","AI-Research-SKILLs","segment-anything",{"evaluate":816,"extract":820},{"promptVersionExtension":230,"promptVersionScoring":231,"score":234,"tags":817,"targetMarket":244,"tier":245},[818,237,819,791,241],"image-segmentation","zero-shot-learning",{"commitSha":306},{"parentExtensionId":822,"repoId":823},"k17155ws9qc0hw7a568bg79sfd86max8","kd70hj1y80mhra5xm5g188j5n586mg18",[237,241,818,791,819],{"evaluatedAt":826,"extractAt":827,"updatedAt":826},1778697221812,1778695116697,{"_creationTime":829,"_id":830,"community":831,"display":832,"identity":838,"providers":843,"relations":851,"tags":853,"workflow":854},1778685991755.708,"k17eak6qjys6kns9c25d40q9kn86n7g2",{"reviewCount":8},{"description":833,"installMethods":834,"name":836,"sourceUrl":837},"Simplest distributed training API. 4 lines to add distributed support to any PyTorch script. Unified API for DeepSpeed/FSDP/Megatron/DDP. Automatic device placement, mixed precision (FP16/BF16/FP8). Interactive config, single launch command. 
HuggingFace ecosystem standard.",{"claudeCode":835},"davila7/claude-code-templates","huggingface-accelerate","https://github.com/davila7/claude-code-templates",{"basePath":839,"githubOwner":840,"githubRepo":841,"locale":18,"slug":842,"type":280},"cli-tool/components/skills/ai-research/distributed-training-accelerate","davila7","claude-code-templates","distributed-training-accelerate",{"evaluate":844,"extract":850},{"promptVersionExtension":230,"promptVersionScoring":231,"score":234,"tags":845,"targetMarket":244,"tier":245},[846,847,241,848,242,849],"distributed-training","pytorch","mlops","performance",{"commitSha":306},{"repoId":852},"kd71fzn4s7r0269fkw47wt670n86ndz0",[241,846,242,848,849,847],{"evaluatedAt":855,"extractAt":856,"updatedAt":855},1778686851352,1778685991755,{"_creationTime":858,"_id":859,"community":860,"display":861,"identity":867,"providers":871,"relations":879,"tags":882,"workflow":883},1778675056600.2493,"k17548yc3rkdnt788dxzjc1jyn86nq83",{"reviewCount":8},{"description":862,"installMethods":863,"name":865,"sourceUrl":866},"Computer vision engineering skill for object detection, image segmentation, and visual AI systems. Covers CNN and Vision Transformer architectures, YOLO/Faster R-CNN/DETR detection, Mask R-CNN/SAM segmentation, and production deployment with ONNX/TensorRT. Includes PyTorch, torchvision, Ultralytics, Detectron2, and MMDetection frameworks. 
Use when building detection pipelines, training custom models, optimizing inference, or deploying vision systems.",{"claudeCode":864},"alirezarezvani/claude-skills","senior-computer-vision","https://github.com/alirezarezvani/claude-skills",{"basePath":868,"githubOwner":869,"githubRepo":870,"locale":18,"slug":865,"type":280},"engineering-team/skills/senior-computer-vision","alirezarezvani","claude-skills",{"evaluate":872,"extract":878},{"promptVersionExtension":230,"promptVersionScoring":231,"score":296,"tags":873,"targetMarket":244,"tier":245},[237,238,818,847,874,875,876,877],"onnx","tensorrt","detectron2","mmdetection",{"commitSha":306},{"parentExtensionId":880,"repoId":881},"k179s2ynpr6g927zdzf23zrhad86net8","kd7ff9s1w43mfyy1n7hf87816186m6px",[237,876,818,877,238,874,847,875],{"evaluatedAt":884,"extractAt":885,"updatedAt":884},1778683410057,1778675056600,{"_creationTime":887,"_id":888,"community":889,"display":890,"identity":894,"providers":897,"relations":903,"tags":904,"workflow":905},1778691799740.4905,"k17c27dcgjsqmxeggb19stv4xn86mf1z",{"reviewCount":8},{"description":891,"installMethods":892,"name":893,"sourceUrl":780},"Deep learning framework (PyTorch Lightning). 
Organize PyTorch code into LightningModules, configure Trainers for multi-GPU/TPU, implement data pipelines, callbacks, logging (W&B, TensorBoard), distributed training (DDP, FSDP, DeepSpeed), for scalable neural network training.",{"claudeCode":778},"PyTorch Lightning",{"basePath":895,"githubOwner":783,"githubRepo":784,"locale":18,"slug":896,"type":280},"scientific-skills/pytorch-lightning","pytorch-lightning",{"evaluate":898,"extract":902},{"promptVersionExtension":230,"promptVersionScoring":231,"score":899,"tags":900,"targetMarket":244,"tier":245},100,[847,241,792,304,901],"framework",{"commitSha":306,"license":271},{"repoId":795},[241,901,792,304,847],{"evaluatedAt":906,"extractAt":799,"updatedAt":906},1778693958717,{"_creationTime":908,"_id":909,"community":910,"display":911,"identity":913,"providers":914,"relations":920,"tags":921,"workflow":922},1778690773482.4866,"k17a3mmgvm5hj49twj487hp64186n2qa",{"reviewCount":8},{"description":504,"installMethods":912,"name":393,"sourceUrl":14},{"claudeCode":12},{"basePath":391,"githubOwner":242,"githubRepo":278,"locale":18,"slug":393,"type":280},{"evaluate":915,"extract":919},{"promptVersionExtension":230,"promptVersionScoring":231,"score":899,"tags":916,"targetMarket":244,"tier":245},[303,242,848,917,918],"data-management","model-management",{"commitSha":306},{"parentExtensionId":283,"repoId":311},[303,917,242,848,918],{"evaluatedAt":923,"extractAt":315,"updatedAt":923},1778691223210]