
Huggingface Community Evals

Skill · Verified · Active

Run evaluations for Hugging Face Hub models using inspect-ai and lighteval on local hardware. Use for backend selection, local GPU evals, and choosing between vLLM / Transformers / accelerate. Not for HF Jobs orchestration, model-card PRs, .eval_results publication, or community-evals automation.

Purpose

Run local, GPU-accelerated evaluations of Hugging Face Hub models to guide backend selection and compare model performance.

Features

  • Run `inspect-ai` evaluations locally
  • Run `lighteval` evaluations locally
  • Support for `vLLM`, Transformers, and `accelerate` backends
  • Configuration for model selection and task execution
  • Guidance on local GPU requirements and troubleshooting
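As a rough sketch of what local runs of the two harnesses look like (the model ID, task names, and argument formats below are illustrative assumptions — verify flags against each tool's current documentation):

```shell
# inspect-ai: the hf/ prefix selects the local Transformers backend;
# a vllm/ prefix would select vLLM (model ID is a placeholder)
inspect eval inspect_evals/gsm8k --model hf/Qwen/Qwen2.5-0.5B-Instruct

# lighteval: the accelerate subcommand runs models on local GPUs; the
# task string follows lighteval's "suite|task|few-shot" convention
lighteval accelerate "model_name=Qwen/Qwen2.5-0.5B-Instruct" "lighteval|gsm8k|0|0"
```

Both commands assume the corresponding packages are installed and a CUDA-capable GPU is visible.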

Use Cases

  • Selecting the best inference backend (vLLM, Transformers, accelerate) for local model evaluations.
  • Performing smoke tests and full evaluations on Hugging Face Hub models using local hardware.
  • Comparing model performance across different tasks and backends without relying on cloud services.
  • Troubleshooting local GPU evaluation setup issues.
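For the backend-selection use case, a simple memory-based rule of thumb can be sketched. This heuristic is not part of either harness; the 2 GiB-per-billion-parameters figure assumes fp16 weights, and the thresholds are illustrative:

```shell
# Hypothetical heuristic: pick a backend from available GPU memory (GiB)
# and model size (billions of parameters).
choose_backend() {
  gpu_gib=$1
  params_b=$2
  # fp16 weights need roughly 2 GiB per billion parameters; vLLM also
  # wants headroom for its KV cache, so require ~2x the weight size.
  need=$((params_b * 2))
  if [ "$gpu_gib" -ge $((need * 2)) ]; then
    echo vllm          # ample memory: best throughput
  elif [ "$gpu_gib" -ge "$need" ]; then
    echo transformers  # fits, but little headroom: simpler loader
  else
    echo accelerate    # model must be sharded or offloaded
  fi
}

choose_backend 80 7   # e.g. an 80 GiB GPU with a 7B model -> vllm
choose_backend 24 13  # 24 GiB GPU with a 13B model -> accelerate
```

vLLM tends to give the best throughput when weights plus KV cache fit comfortably; `accelerate` is the fallback when the model must be split across devices or offloaded to CPU.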

Non-Goals

  • Orchestrating Hugging Face Jobs for remote evaluations.
  • Editing model cards, `model-index` files, or publishing `.eval_results`.
  • Automating community evaluations or PRs.
  • Directly managing Hugging Face Hub resources beyond model evaluation.

Installation

/plugin install skills@huggingface-skills

Quality Score

Verified
98/100
Analyzed about 16 hours ago

Trust Signals

Last commit: 2 days ago
Stars: 10.5k
License: Apache-2.0

Similar Extensions

Context Compression

100

This skill should be used when the user asks to "compress context", "summarize conversation history", "implement compaction", "reduce token usage", or mentions context compression, structured summarization, tokens-per-task optimization, or long-running agent sessions exceeding context limits.

Skill
muratcankoylan

Hf Cli

100

Hugging Face Hub CLI (`hf`) for downloading, uploading, and managing models, datasets, spaces, buckets, repos, papers, jobs, and more on the Hugging Face Hub. Use when: handling authentication; managing local cache; managing Hugging Face Buckets; running or scheduling jobs on Hugging Face infrastructure; managing Hugging Face repos; discussions and pull requests; browsing models, datasets and spaces; reading, searching, or browsing academic papers; managing collections; querying datasets; configuring spaces; setting up webhooks; or deploying and managing HF Inference Endpoints. Make sure to use this skill whenever the user mentions 'hf', 'huggingface', 'Hugging Face', 'huggingface-cli', or 'hugging face cli', or wants to do anything related to the Hugging Face ecosystem and to AI and ML in general. Also use for cloud storage needs like training checkpoints, data pipelines, or agent traces. Use even if the user doesn't explicitly ask for a CLI command. Replaces the deprecated `huggingface-cli`.

Skill
huggingface

Chat Format

100

Format prompts for different LLM providers with chat templates and HNSW-powered context retrieval

Skill
ruvnet

Oh My Claudecode

100

Process-first advisor routing for Claude, Codex, or Gemini via `omc ask`, with artifact capture and no raw CLI assembly

Skill
Yeachan-Heo

Wrap Up Ritual

100

End-of-session ritual that audits changes, runs quality checks, captures learnings, and produces a session summary. Use when saying "wrap up", "done for the day", "finish coding", or ending a coding session.

Skill
rohitg00

Project Development

100

This skill should be used when the user asks to "start an LLM project", "design batch pipeline", "evaluate task-model fit", "structure agent project", or mentions pipeline architecture, agent-assisted development, cost estimation, or choosing between LLM and traditional approaches.

Skill
muratcankoylan

© 2025 SkillRepo · Find the right skill, skip the noise.