
LlamaGuard


Meta's 7-8B specialized moderation model for LLM input/output filtering. Six safety categories: violence/hate, sexual content, weapons, substances, self-harm, criminal planning. 94-95% accuracy. Deploy with vLLM, Hugging Face, or SageMaker. Integrates with NeMo Guardrails.

Purpose

To provide a robust, pre-trained AI model for filtering harmful or inappropriate content in LLM inputs and outputs, ensuring safer AI interactions.

Features

  • Specialized moderation model (Meta's LlamaGuard 7-8B)
  • 6 detailed safety categories (violence, sexual, weapons, substances, self-harm, criminal planning)
  • High accuracy (94-95%)
  • Multiple deployment options (vLLM, Hugging Face, SageMaker)
  • Integration with NeMo Guardrails
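
The NeMo Guardrails integration is configured declaratively. A minimal sketch of a `config.yml`, assuming a LlamaGuard instance served behind vLLM's OpenAI-compatible endpoint; the port and model name are illustrative:

```yaml
models:
  # The main application LLM is configured separately; only the
  # llama_guard entry is shown here.
  - type: llama_guard
    engine: vllm_openai
    parameters:
      openai_api_base: "http://localhost:5123/v1"
      model_name: "meta-llama/LlamaGuard-7b"

rails:
  input:
    flows:
      - llama guard check input
  output:
    flows:
      - llama guard check output
```

With this in place, NeMo Guardrails calls the LlamaGuard endpoint to screen user prompts before they reach the main model and to screen responses before they are returned.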

Use Cases

  • Moderating user prompts before sending to an LLM
  • Filtering LLM responses before displaying them to users
  • Implementing content safety guardrails in production AI applications
  • Detecting and classifying various types of harmful content

Non-Goals

  • Performing general text generation or summarization
  • Acting as a general-purpose chatbot
  • Replacing the need for LLM alignment training itself

Workflow

  1. Install necessary Python libraries (transformers, torch).
  2. Log in to HuggingFace CLI.
  3. Load the LlamaGuard model and tokenizer.
  4. Prepare chat input using the tokenizer's template.
  5. Generate moderation output from the model.
  6. Parse the output to determine safety status and category.
  7. Block or allow content based on the moderation result.
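
The workflow above can be sketched in Python, following the usage pattern from Meta's LlamaGuard model card. The model ID, dtype, and the `O3`-style category codes are assumptions based on the published 7B checkpoint; the heavy model load is wrapped in a function so the parsing helper can be used standalone:

```python
def load_guard(model_id="meta-llama/LlamaGuard-7b"):
    """Load the LlamaGuard model and tokenizer (workflow step 3).

    Requires `pip install transformers torch` and a prior
    `huggingface-cli login` with access to the gated meta-llama weights.
    """
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id, torch_dtype=torch.bfloat16, device_map="auto"
    )
    return tokenizer, model


def moderate(chat, tokenizer, model):
    """Apply the chat template and generate a moderation verdict (steps 4-5).

    `chat` is a list of messages like [{"role": "user", "content": "..."}].
    """
    input_ids = tokenizer.apply_chat_template(chat, return_tensors="pt").to(model.device)
    output = model.generate(input_ids=input_ids, max_new_tokens=100)
    # Decode only the newly generated tokens, not the prompt.
    return tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True)


def parse_guard_output(text):
    """Parse the verdict into (is_safe, category) (step 6).

    LlamaGuard emits either "safe", or "unsafe" followed by a category
    code (e.g. "O3") on the next line.
    """
    lines = text.strip().splitlines()
    if lines and lines[0].strip().lower() == "safe":
        return True, None
    category = lines[1].strip() if len(lines) > 1 else None
    return False, category
```

A caller then blocks or allows the content (step 7) based on the first element of the returned tuple; for example, `parse_guard_output("unsafe\nO3")` returns `(False, "O3")`.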

Prerequisites

  • Python 3.7+
  • transformers library
  • torch library
  • HuggingFace CLI login with token
  • GPU resources (recommended for performance)

Trust

  • Warning (issues attention): 17 issues opened, 4 closed in the last 90 days, indicating a low closure rate and potentially slow maintainer response.

Compliance

  • Info (GDPR): The skill moderates content but does not inherently process personal data. However, the LLM itself might process PII if present in the input, and this is not explicitly sanitized.

Execution

  • Warning (pinned dependencies): Dependencies are listed but not pinned to explicit versions, and no lockfile is mentioned for the Python environment, posing a risk to reproducibility and stability.
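
Consumers can mitigate this by pinning versions themselves in a `requirements.txt`. The versions below are illustrative, not taken from the skill; verify against your own environment:

```text
# requirements.txt -- illustrative pins, not from the skill itself
transformers==4.38.2
torch==2.2.1
```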

Installation

npx skills add davila7/claude-code-templates

Runs the Vercel skills CLI (skills.sh) via npx; requires Node.js locally and at least one installed skills-compatible agent (Claude Code, Cursor, Codex, …). Assumes the repo follows the agentskills.io format.

Quality Score

75/100 (analyzed about 19 hours ago)

Trust Signals

Last commit: about 21 hours ago
Stars: 27.2k
License: MIT

Similar Extensions


Constitutional AI · score 98 · Skill by Orchestra-Research

Anthropic's method for training harmless AI through self-improvement. Two-phase approach: supervised learning with self-critique/revision, then RLAIF (RL from AI Feedback). Use for safety alignment, reducing harmful outputs without human labels. Powers Claude's safety system.

NeMo Guardrails · score 97 · Skill by Orchestra-Research

NVIDIA's runtime safety framework for LLM applications. Features jailbreak detection, input/output validation, fact-checking, hallucination detection, PII filtering, toxicity detection. Uses Colang 2.0 DSL for programmable rails. Production-ready, runs on a T4 GPU.

Constitutional AI · score 95 · Skill by davila7

Anthropic's method for training harmless AI through self-improvement. Two-phase approach: supervised learning with self-critique/revision, then RLAIF (RL from AI Feedback). Use for safety alignment, reducing harmful outputs without human labels. Powers Claude's safety system.

Fixflow · score 100 · Skill by majiayu000

Execute coding tasks with a strict delivery workflow: build a full plan, implement one step at a time, run tests continuously, and commit by default after each step (`per_step`). Supports explicit commit policy overrides (`final_only`, `milestone`) and optional BDD (Given/When/Then) when users ask for behavior-driven delivery or requirements are unclear.

Safe Mode · score 100 · Skill by rohitg00

Prevent destructive operations using Claude Code hooks. Three modes: cautious (warn on dangerous commands), lockdown (restrict edits to one directory), and clear (remove restrictions). Uses PreToolUse matchers for Bash, Edit, and Write.

© 2025 SkillRepo · Find the right skill, skip the noise.