LlamaGuard
Meta's 7-8B-parameter specialized moderation model for LLM input/output filtering. Six safety categories: violence/hate, sexual content, weapons, substances, self-harm, and criminal planning. 94-95% accuracy. Deploy with vLLM, HuggingFace, or SageMaker. Integrates with NeMo Guardrails.
To provide a robust, pre-trained AI model for filtering harmful or inappropriate content in LLM inputs and outputs, ensuring safer AI interactions.
Features
- Specialized moderation model (Meta's LlamaGuard 7-8B)
- 6 detailed safety categories (violence, sexual, weapons, substances, self-harm, criminal planning)
- High accuracy (94-95%)
- Multiple deployment options (vLLM, HuggingFace, SageMaker)
- Integration with NeMo Guardrails
Use Cases
- Moderating user prompts before sending to an LLM
- Filtering LLM responses before displaying them to users
- Implementing content safety guardrails in production AI applications
- Detecting and classifying various types of harmful content
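The first two use cases can be sketched as a gating wrapper around an LLM call. This is a minimal sketch: `guarded_complete` and the `llm`/`is_safe` callables are hypothetical names, with `is_safe` standing in for whatever LlamaGuard-backed check you deploy.

```python
BLOCK_MESSAGE = "Sorry, I can't help with that."


def guarded_complete(prompt, llm, is_safe):
    """Moderate the prompt, call the LLM only if safe, then moderate the reply.

    llm: callable str -> str (the model being guarded).
    is_safe: callable chat -> bool (e.g. wrapping a LlamaGuard verdict).
    """
    # Gate the user prompt before it ever reaches the LLM.
    if not is_safe([{"role": "user", "content": prompt}]):
        return BLOCK_MESSAGE
    reply = llm(prompt)
    # Gate the LLM's response before showing it to the user.
    if not is_safe([{"role": "user", "content": prompt},
                    {"role": "assistant", "content": reply}]):
        return BLOCK_MESSAGE
    return reply
```

The same wrapper covers both directions of filtering: a blocked prompt never costs an LLM call, and a blocked response never reaches the user.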
Non-Goals
- Performing general text generation or summarization
- Acting as a general-purpose chatbot
- Replacing the need for LLM alignment training itself
Workflow
- Install necessary Python libraries (transformers, torch).
- Log in to HuggingFace CLI.
- Load the LlamaGuard model and tokenizer.
- Prepare chat input using the tokenizer's template.
- Generate moderation output from the model.
- Parse the output to determine safety status and category.
- Block or allow content based on the moderation result.
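The steps above can be sketched roughly as follows, assuming the HuggingFace model id `meta-llama/LlamaGuard-7b` and the chat-template usage shown on its model card. `moderate` and `parse_guard_output` are illustrative helper names, not library functions, and the category table follows the LlamaGuard-7b default taxonomy (later Llama Guard versions use a different, MLCommons-based taxonomy).

```python
# Default LlamaGuard-7b taxonomy codes (verify against your model version).
UNSAFE_CATEGORIES = {
    "O1": "Violence and Hate",
    "O2": "Sexual Content",
    "O3": "Criminal Planning",
    "O4": "Guns and Illegal Weapons",
    "O5": "Regulated or Controlled Substances",
    "O6": "Self-Harm",
}


def moderate(chat, model_id="meta-llama/LlamaGuard-7b"):
    """Generate LlamaGuard's raw verdict for a chat.

    Requires transformers, torch, a HuggingFace login with access to the
    gated model, and (realistically) a GPU.
    """
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id, torch_dtype=torch.bfloat16, device_map="auto"
    )
    input_ids = tokenizer.apply_chat_template(chat, return_tensors="pt").to(model.device)
    output = model.generate(input_ids=input_ids, max_new_tokens=100, pad_token_id=0)
    # Decode only the newly generated tokens, not the echoed prompt.
    return tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True)


def parse_guard_output(text):
    """Split the raw verdict into (is_safe, category_name_or_None).

    LlamaGuard answers 'safe', or 'unsafe' with the violated category
    code (e.g. 'O3') on the next line.
    """
    lines = text.strip().splitlines()
    if lines and lines[0].strip().lower() == "safe":
        return True, None
    code = lines[1].strip() if len(lines) > 1 else None
    return False, UNSAFE_CATEGORIES.get(code, code)
```

For example, `parse_guard_output("unsafe\nO3")` returns `(False, "Criminal Planning")`, which your application can map to a block-or-allow decision.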
Prerequisites
- Python 3.7+
- transformers library
- torch library
- HuggingFace CLI login with token
- GPU resources (recommended for performance)
Trust
- warning (issues attention): 17 issues opened, 4 closed in the last 90 days, indicating a low closure rate and potentially slow maintainer response.
Compliance
- info (GDPR): The skill moderates content but does not inherently process personal data. However, the LLM itself might process PII if present in the input, and this is not explicitly sanitized.
Execution
- warning (pinned dependencies): Dependencies are listed but not pinned to explicit versions, and no lockfile is mentioned for the Python environment, posing a risk for reproducibility and stability.
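One way to address this, assuming a pip-based setup, is a pinned `requirements.txt`. The version numbers below are placeholders, not tested recommendations; pin to whatever versions you have actually validated.

```
transformers==4.38.2
torch==2.2.1
```

Committing this file (or a lockfile from a tool such as pip-tools) makes the environment reproducible across deployments.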
Installation
npx skills add davila7/claude-code-templates
Runs the Vercel skills CLI (skills.sh) via npx. Requires Node.js locally and at least one installed skills-compatible agent (Claude Code, Cursor, Codex, …). Assumes the repo follows the agentskills.io format.
Similar Extensions
Constitutional Ai (98)
Anthropic's method for training harmless AI through self-improvement. Two-phase approach: supervised learning with self-critique/revision, then RLAIF (RL from AI Feedback). Use for safety alignment, reducing harmful outputs without human labels. Powers Claude's safety system.
NeMo Guardrails (97)
NVIDIA's runtime safety framework for LLM applications. Features jailbreak detection, input/output validation, fact-checking, hallucination detection, PII filtering, and toxicity detection. Uses the Colang 2.0 DSL for programmable rails. Production-ready, runs on a T4 GPU.
Fixflow (100)
Execute coding tasks with a strict delivery workflow: build a full plan, implement one step at a time, run tests continuously, and commit by default after each step (`per_step`). Supports explicit commit policy overrides (`final_only`, `milestone`) and optional BDD (Given/When/Then) when users ask for behavior-driven delivery or requirements are unclear.
Safe Mode (100)
Prevent destructive operations using Claude Code hooks. Three modes: cautious (warn on dangerous commands), lockdown (restrict edits to one directory), and clear (remove restrictions). Uses PreToolUse matchers for Bash, Edit, and Write.