
LlamaGuard


Meta's 7-8B specialized moderation model for LLM input/output filtering. Six safety categories: violence/hate, sexual content, weapons, substances, self-harm, criminal planning. 94-95% accuracy. Deploy with vLLM, HuggingFace, or SageMaker. Integrates with NeMo Guardrails.

Purpose

To provide a specialized, high-accuracy moderation model for LLM inputs and outputs, ensuring content safety and adherence to ethical guidelines.

Features

  • 7-8B parameter moderation model
  • Classifies 6 safety categories (violence, sexual, weapons, substances, self-harm, criminal planning)
  • High accuracy (94-95%)
  • Deployment options: vLLM, HuggingFace, SageMaker
  • Integration with NeMo Guardrails
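The features above can be sketched end-to-end. The block below is an illustrative prompt builder for a Llama Guard-style safety check; the real template ships with the model's tokenizer (via `apply_chat_template`), so the category wording and layout here are simplified assumptions, not the official format:

```python
# Simplified sketch of a Llama Guard-style moderation prompt.
# Category names and layout are assumptions based on the six-category
# taxonomy this page describes; consult the model card for the real template.

CATEGORIES = [
    "S1: Violence and Hate",
    "S2: Sexual Content",
    "S3: Guns and Illegal Weapons",
    "S4: Regulated or Controlled Substances",
    "S5: Suicide and Self-Harm",
    "S6: Criminal Planning",
]

def build_prompt(conversation):
    """conversation: list of (role, text) pairs, role in {'User', 'Agent'}."""
    policy = "\n".join(CATEGORIES)
    turns = "\n\n".join(f"{role}: {text}" for role, text in conversation)
    return (
        "Task: Check if there is unsafe content in the conversation "
        "according to our safety policy with the below categories.\n\n"
        f"<BEGIN UNSAFE CONTENT CATEGORIES>\n{policy}\n<END UNSAFE CONTENT CATEGORIES>\n\n"
        f"<BEGIN CONVERSATION>\n{turns}\n<END CONVERSATION>\n\n"
        "Provide your safety assessment: 'safe', or 'unsafe' plus violated categories."
    )
```

In a real deployment this string (or the tokenizer's own template) is what you send to the 7-8B classifier; the features list above is exercised by what comes back.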

Use Cases

  • Moderating user prompts before sending to an LLM
  • Filtering LLM responses to prevent harmful content generation
  • Implementing content safety guardrails in production LLM applications
  • Integrating with frameworks like NeMo Guardrails for comprehensive safety
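A minimal moderation gate for these use cases only needs to interpret the model's verdict string. Assuming the output format shown elsewhere on this page ('safe', or 'unsafe' followed by violated category codes such as 'S6'), a parser might look like:

```python
def parse_verdict(raw):
    """Parse a Llama Guard-style verdict.

    Returns (is_safe, categories): ('unsafe\nS6' -> (False, ['S6'])).
    Anything that does not start with 'unsafe' is treated as safe here;
    a production gate may want to fail closed instead.
    """
    lines = [ln.strip() for ln in raw.strip().splitlines() if ln.strip()]
    if lines and lines[0].lower() == "unsafe":
        return False, lines[1:]
    return True, []
```

Call it on the classifier output before forwarding a prompt to the main LLM, and again on the LLM's response before showing it to the user.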

Non-Goals

  • Replacing the core LLM's generation capabilities
  • Providing general-purpose natural language understanding beyond safety classification
  • Real-time moderation on low-resource devices without GPU acceleration

Documentation

  • Configuration & parameter reference: While installation and basic usage are detailed, specific parameters for the `moderate` function and advanced configuration options for vLLM deployment lack explicit documentation, including their defaults.

Code Execution

  • Validation: Input validation is implied through Pydantic models in the FastAPI example, but the core Python usage in SKILL.md lacks explicit schema validation for inputs like chat history.
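A lightweight stand-in for that missing schema validation, using only the standard library (the expected `{'role', 'content'}` turn shape is an assumption modeled on common chat-history formats, not the skill's documented schema):

```python
VALID_ROLES = {"user", "assistant"}

def validate_chat(history):
    """Reject malformed chat history before it reaches the moderation call."""
    if not isinstance(history, list) or not history:
        raise ValueError("chat history must be a non-empty list")
    for i, turn in enumerate(history):
        if not isinstance(turn, dict):
            raise ValueError(f"turn {i} must be a dict")
        if set(turn) != {"role", "content"}:
            raise ValueError(f"turn {i} must have exactly 'role' and 'content' keys")
        if turn["role"] not in VALID_ROLES:
            raise ValueError(f"turn {i} has invalid role {turn['role']!r}")
        if not isinstance(turn["content"], str) or not turn["content"].strip():
            raise ValueError(f"turn {i} content must be a non-empty string")
    return history
```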

Compliance

  • GDPR: The skill processes user messages for safety, which may contain personal data. While it doesn't submit this data to third parties, it doesn't explicitly sanitize personal data before analysis.
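One way to reduce that exposure is to redact obvious personal data before the text reaches the classifier. This is only a sketch with two naive regexes; real GDPR compliance needs a proper PII pipeline (NeMo Guardrails' PII filtering, for example):

```python
import re

# Naive pre-moderation scrubber: masks email addresses and phone-like
# numbers. These patterns are illustrative, not exhaustive PII detection.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE = re.compile(r"\+?\d[\d\s().-]{7,}\d")

def scrub(text):
    text = EMAIL.sub("[EMAIL]", text)
    return PHONE.sub("[PHONE]", text)
```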

Errors

  • Actionable error messages: Error messages like 'unsafe\nS6' are informative about the failure and category, but lack specific remediation steps for the user.
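A thin wrapper can turn those terse verdicts into actionable messages. The S1-S6 codes follow the six-category taxonomy described above; the remediation strings are our own hypothetical additions, not part of the skill:

```python
# Hypothetical remediation advice per category code.
REMEDIATION = {
    "S1": "Remove violent or hateful language and resubmit.",
    "S2": "Remove sexually explicit content.",
    "S3": "Remove content about acquiring or building weapons.",
    "S4": "Remove content about obtaining controlled substances.",
    "S5": "Do not resubmit; show self-harm support resources instead.",
    "S6": "Remove content that plans or facilitates a crime.",
}

def explain(verdict):
    """Turn a raw verdict like 'unsafe\nS6' into an actionable message."""
    lines = [ln.strip() for ln in verdict.strip().splitlines() if ln.strip()]
    if not lines:
        raise ValueError("empty verdict")
    if lines[0].lower() == "safe":
        return "Content passed moderation."
    codes = lines[1:]
    tips = [REMEDIATION.get(c, f"flagged: {c}") for c in codes]
    return "Blocked (" + ", ".join(codes) + "): " + " ".join(tips)
```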

Execution

  • Pinned dependencies: Dependencies are listed and a lockfile is present, but specific pinned versions for Python libraries are not explicitly stated in the SKILL.md.

Practical Utility

  • Edge cases: The SKILL.md mentions potential issues like 'Model access denied' and 'High latency' but doesn't detail specific failure modes or recovery steps for the core moderation functions themselves.
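For those failure modes, a fail-closed wrapper is a common recovery pattern: retry a few times, then treat the content as unsafe rather than letting it through unmoderated. Here `moderate` is a placeholder for whatever client call you actually use, and 'S0' is a made-up sentinel for "moderation unavailable":

```python
def moderate_safely(moderate, text, retries=2):
    """Call `moderate(text)`, retrying on errors; fail closed if all attempts fail.

    `moderate` is any callable returning a verdict string ('safe' or
    'unsafe\n<codes>'). 'S0' is a hypothetical sentinel, not a real category.
    """
    for attempt in range(retries + 1):
        try:
            return moderate(text)
        except Exception:
            if attempt == retries:
                return "unsafe\nS0"  # fail closed: moderation unavailable
```

Failing closed trades availability for safety; an application that prefers availability could log and pass content through instead, but then an outage silently disables the guardrail.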

Installation

First, add the marketplace, then install the skill:

/plugin marketplace add Orchestra-Research/AI-Research-SKILLs
/plugin install AI-Research-SKILLs@ai-research-skills

Quality Score

Verified
95/100
Analyzed about 17 hours ago

Trust Signals

Last commit: 16 days ago
Stars: 8.3k
License: MIT

Similar Extensions

LlamaGuard · 75
Meta's 7-8B specialized moderation model for LLM input/output filtering. Six safety categories: violence/hate, sexual content, weapons, substances, self-harm, criminal planning. 94-95% accuracy. Deploy with vLLM, HuggingFace, or SageMaker. Integrates with NeMo Guardrails.
Skill by davila7

Constitutional AI · 98
Anthropic's method for training harmless AI through self-improvement. Two-phase approach: supervised learning with self-critique/revision, then RLAIF (RL from AI Feedback). Use for safety alignment, reducing harmful outputs without human labels. Powers Claude's safety system.
Skill by Orchestra-Research

NeMo Guardrails · 97
NVIDIA's runtime safety framework for LLM applications. Features jailbreak detection, input/output validation, fact-checking, hallucination detection, PII filtering, toxicity detection. Uses Colang 2.0 DSL for programmable rails. Production-ready, runs on a T4 GPU.
Skill by Orchestra-Research

Constitutional AI · 95
Anthropic's method for training harmless AI through self-improvement. Two-phase approach: supervised learning with self-critique/revision, then RLAIF (RL from AI Feedback). Use for safety alignment, reducing harmful outputs without human labels. Powers Claude's safety system.
Skill by davila7

Fixflow · 100
Execute coding tasks with a strict delivery workflow: build a full plan, implement one step at a time, run tests continuously, and commit by default after each step (`per_step`). Supports explicit commit policy overrides (`final_only`, `milestone`) and optional BDD (Given/When/Then) when users ask for behavior-driven delivery or requirements are unclear.
Skill by majiayu000

Safe Mode · 100
Prevent destructive operations using Claude Code hooks. Three modes: cautious (warn on dangerous commands), lockdown (restrict edits to one directory), and clear (remove restrictions). Uses PreToolUse matchers for Bash, Edit, and Write.
Skill by rohitg00

© 2025 SkillRepo · Find the right skill, skip the noise.