
NeMo Guardrails

Skill · Verified · Active

NVIDIA's runtime safety framework for LLM applications. Features jailbreak detection, input/output validation, fact-checking, hallucination detection, PII filtering, toxicity detection. Uses Colang 2.0 DSL for programmable rails. Production-ready, runs on T4 GPU.

Purpose

To provide a programmable, production-ready runtime safety framework for LLM applications, ensuring security, accuracy, and ethical compliance.

Features

  • Programmable runtime safety rails with Colang 2.0 DSL
  • Jailbreak detection and prompt injection prevention
  • Input and output validation for LLM interactions
  • Fact-checking and hallucination detection
  • PII filtering and toxicity detection
  • Integration with external moderation tools (Presidio, LlamaGuard)
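The PII-filtering rail listed above can be illustrated with a self-contained sketch. This is not the nemoguardrails API (the framework typically delegates PII detection to Presidio); the `redact_pii` helper and its pattern set are assumptions made for illustration:

```python
import re

# Illustrative output-rail idea: redact common PII patterns from a
# model response before it is returned to the user.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact_pii(text: str) -> str:
    """Replace each PII match with a typed placeholder like [EMAIL]."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text
```

For example, `redact_pii("Mail me at jane@example.com")` returns `"Mail me at [EMAIL]"`. A production rail would use an NER-based detector rather than regexes, but the hook point (rewrite output before delivery) is the same.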

Use Cases

  • Implementing robust safety mechanisms for production LLM applications
  • Preventing prompt injection attacks and jailbreaking attempts
  • Validating LLM inputs and outputs for accuracy and safety
  • Filtering sensitive personal information (PII) from LLM interactions
  • Ensuring LLM responses are factual and non-toxic
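To make the prompt-injection use case concrete, here is a crude keyword heuristic (a sketch only; NeMo Guardrails' actual jailbreak detection is model-based, and the marker list below is invented for illustration):

```python
# Hypothetical injection screen: flag inputs containing phrases that
# commonly appear in prompt-injection or jailbreak attempts.
INJECTION_MARKERS = (
    "ignore previous instructions",
    "disregard your system prompt",
    "you are now dan",
)

def looks_like_injection(user_input: str) -> bool:
    """Return True if any known injection phrase appears (case-insensitive)."""
    lowered = user_input.lower()
    return any(marker in lowered for marker in INJECTION_MARKERS)
```

A keyword list is trivially bypassed; frameworks like this one pair such cheap checks with classifier-based detectors, but the rail contract (inspect input, allow or block) is identical.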

Non-Goals

  • Replacing LLM training-time safety mechanisms
  • Providing a general-purpose LLM prompt engineering tool
  • Acting as a data pipeline or ETL tool outside of LLM interaction safety

Workflow

  1. Define safety rules and flows using Colang 2.0 DSL.
  2. Configure LLM parameters and integrate custom actions or external models.
  3. Instantiate LLMRails with the defined configuration.
  4. Generate LLM responses through the configured rails.
  5. Handle potential safety violations by blocking or refining output.
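The five steps above can be sketched as a toy pipeline. This is NOT the nemoguardrails API (which centers on `LLMRails` and Colang flows); `ToyRails` is an invented stand-in showing the control flow of steps 3–5:

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class ToyRails:
    """Minimal rails pipeline: each rail is a predicate that can veto text."""
    input_rails: List[Callable[[str], bool]] = field(default_factory=list)
    output_rails: List[Callable[[str], bool]] = field(default_factory=list)

    def generate(self, prompt: str, llm: Callable[[str], str]) -> str:
        # Run input rails before the model sees the prompt.
        if any(not rail(prompt) for rail in self.input_rails):
            return "I can't help with that request."
        answer = llm(prompt)
        # Run output rails before the response reaches the user.
        if any(not rail(answer) for rail in self.output_rails):
            return "I can't share that response."
        return answer

# Usage: block prompts that try to override instructions.
rails = ToyRails(input_rails=[lambda p: "ignore previous" not in p.lower()])
```

In the real library the rails are declared in Colang rather than as Python predicates, but the shape (validate input, call the model, validate output, block on violation) matches the workflow above.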

Practices

  • Runtime Safety
  • Input Validation
  • Output Validation
  • PII Filtering
  • Toxicity Detection

Prerequisites

  • Python 3.8+
  • nemoguardrails library

Scope

  • Tool surface size: The SKILL.md defines a framework with flexible Colang flows and custom actions rather than a fixed set of tools, making a direct tool count difficult. The examples showcase a few core concepts.

Installation

npx skills add davila7/claude-code-templates

Runs the Vercel skills CLI (skills.sh) via npx. Requires a local Node.js installation and at least one skills-compatible agent (Claude Code, Cursor, Codex, etc.). Assumes the repository follows the agentskills.io format.

Quality Score

Verified
98/100
Analyzed 1 day ago

Trust Signals

Last commit: 1 day ago
Stars: 27.2k
License: MIT

Similar Extensions

NeMo Guardrails

97

NVIDIA's runtime safety framework for LLM applications. Features jailbreak detection, input/output validation, fact-checking, hallucination detection, PII filtering, toxicity detection. Uses Colang 2.0 DSL for programmable rails. Production-ready, runs on T4 GPU.

Skill
Orchestra-Research

Safe Mode

100

Prevent destructive operations using Claude Code hooks. Three modes — cautious (warn on dangerous commands), lockdown (restrict edits to one directory), and clear (remove restrictions). Uses PreToolUse matchers for Bash, Edit, and Write.

Skill
rohitg00

Prompt Guard

100

Meta's 86M prompt injection and jailbreak detector. Filters malicious prompts and third-party data for LLM apps. 99%+ TPR, <1% FPR. Fast (<2ms GPU). Multilingual (8 languages). Deploy with HuggingFace or batch processing for RAG security.

Skill
Orchestra-Research

LLM Gate

98

LLM-powered quality verification using prompt hooks. Validates commit messages, code patterns, and conventions using AI before allowing operations. Use to set up intelligent guardrails.

Skill
rohitg00

Llamaguard

95

Meta's 7-8B specialized moderation model for LLM input/output filtering. 6 safety categories - violence/hate, sexual content, weapons, substances, self-harm, criminal planning. 94-95% accuracy. Deploy with vLLM, HuggingFace, Sagemaker. Integrates with NeMo Guardrails.

Skill
Orchestra-Research

Careful

95

Safety guardrails for destructive commands. Warns before rm -rf, DROP TABLE, force-push, git reset --hard, kubectl delete, and similar destructive operations. User can override each warning. Use when touching prod, debugging live systems, or working in a shared environment. Use when asked to "be careful", "safety mode", "prod mode", or "careful mode". (gstack)

Skill
garrytan