Review Skill
Skill · Verified · Active
Review a proposed Agent Skill for structural validity and content quality before publishing. Runs the skill-validator CLI to check for structural issues, scores the skill with an LLM judge, and interprets the results to advise SMEs on what to address. Use when a user wants to review, validate, or quality-check an Agent Skill.
Purpose: ensure Agent Skills meet structural and content-quality standards before publishing, by providing automated validation and expert-style review.
Features
- Structural validation using skill-validator CLI
- Optional LLM scoring for content quality
- Interpretation of validation and scoring results
- Step-by-step workflow guidance
- Clear prerequisite checks and installation instructions
Use Cases
- Reviewing a new agent skill for structural correctness
- Assessing the content quality of an agent skill with LLM scoring
- Getting actionable feedback on areas for improvement in an agent skill
- Validating an agent skill before it is published to a catalog
Non-Goals
- Executing the agent skill itself
- Modifying the agent skill code directly
- Providing a general-purpose code linter or static analysis tool
Documentation
- Configuration & parameter reference (info): The SKILL.md mentions environment variables for LLM scoring and configuration state but does not explicitly document their precedence or default values.
Code Execution
- Validation (info): The `skill-validator` CLI likely performs internal validation, but the skill itself does not explicitly demonstrate schema validation for its own inputs or outputs.
Installation
npx skills add mongodb/agent-skills
Runs the Vercel skills CLI (skills.sh) via npx. Requires Node.js locally and at least one installed skills-compatible agent (Claude Code, Cursor, Codex, …). Assumes the repo follows the agentskills.io format.
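As a quick pre-flight, the Node.js prerequisite and the install command above can be run together; only `node --version` and the listed `npx` command are used here.

```shell
# Confirm the Node.js prerequisite is met (npx ships with Node.js).
node --version || echo "Node.js is required for npx"

# Install the skill via the Vercel skills CLI, fetched on demand by npx.
npx skills add mongodb/agent-skills
```

If `node --version` fails, install Node.js first; the `npx` step cannot run without it.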
Quality Score
Trust Signals: Verified
Similar Extensions
Claude Handoff
100 · Run /handoff to capture session data, then write a phased implementation plan that references it. Creates beads for tracking.
Unslop Review
100 · Rewrites code review comments so they read like a human teammate wrote them. Cuts corporate-AI throat-clearing ("I noticed...", "I was wondering if perhaps...", "It might be worth considering..."). Each comment is direct: location, the issue, a concrete fix. Use when user says "humanize review", "de-slop PR comment", "make this feedback sound human", "review this PR", "code review", "/review", "/unslop-review". Auto-triggers when reviewing PRs.
Review Pull Request
100 · Review a pull request end-to-end using GitHub CLI. Covers diff analysis, commit history review, CI/CD check verification, severity-leveled feedback (blocking/suggestion/nit/praise), and gh pr review submission. Use when a pull request is assigned for review, performing a self-review before requesting others' input, conducting a second review after feedback is addressed, or auditing a merged PR for post-merge quality assessment.
Oh My Claudecode
100 · Process-first advisor routing for Claude, Codex, or Gemini via `omc ask`, with artifact capture and no raw CLI assembly.
ClawSweeper Skill
100 · Use for all ClawSweeper work: OpenClaw issue/PR sweep reports, commit-review reports, repair jobs, cloud fix PRs, @clawsweeper maintainer mention commands, trusted ClawSweeper-reviewed autofix/automerge, GitHub Actions monitoring, permissions, gates, and manual backfills.
Codex PR Review
100 · Reviews pull requests in Drupal 11 (or other) projects following the Codex methodology (business logic, edge cases in hooks/queries, security, performance, completeness). Generates a .md report in the detected IDE's folder (.antigravity/, .cursor/, .vscode/ or docs/) with findings by severity and actionable fixes. Use when the user asks for "revisión Codex", "revisión de PR", "revisar PR", "revisar PR