Constitutional AI
Anthropic's method for training harmless AI through self-improvement. Two-phase approach: supervised learning with self-critique and revision, then RLAIF (RL from AI Feedback). Use for safety alignment, reducing harmful outputs without human labels. Powers Claude's safety system.
To enable the training of harmless AI models through AI-generated feedback and self-critique, reducing the need for human-labeled data and improving AI safety alignment.
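The supervised phase can be summarized as a generate, critique, revise loop. The example below is a minimal illustrative sketch, not Anthropic's actual training code: `generate` stands in for whatever prompt-in, text-out call drives the model being trained, and the critique/revision instructions are paraphrased placeholders rather than the exact prompts from the paper.

```python
# Minimal sketch of the supervised (SL) phase: generate -> critique -> revise.
# Assumptions: `generate` is any prompt-in, text-out call to the model being
# trained; the critique and revision requests are illustrative placeholders.
from typing import Callable

CRITIQUE_REQUEST = (
    "Identify specific ways in which the assistant's last response is "
    "harmful, unethical, or otherwise problematic."
)
REVISION_REQUEST = (
    "Rewrite the assistant's response to remove any harmful, unethical, "
    "or otherwise problematic content."
)

def critique_and_revise(generate: Callable[[str], str], prompt: str) -> dict:
    """Run one critique/revision round; the revision becomes SL fine-tuning data."""
    initial = generate(prompt)

    critique = generate(
        f"Human: {prompt}\n\nAssistant: {initial}\n\n"
        f"Critique Request: {CRITIQUE_REQUEST}"
    )

    revision = generate(
        f"Human: {prompt}\n\nAssistant: {initial}\n\n"
        f"Critique: {critique}\n\nRevision Request: {REVISION_REQUEST}"
    )

    # Collected (prompt, revision) pairs are used to fine-tune the model.
    return {"prompt": prompt, "initial": initial,
            "critique": critique, "revision": revision}
```

In practice several critique/revision rounds can be chained, feeding the latest revision back in before keeping the final answer as training data.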
Features
- Implements Constitutional AI for AI safety training
- Details the two-phase approach: Supervised Learning (SL) followed by RLAIF
- Provides Python code examples for self-critique, revision, and preference evaluation (see the preference-evaluation sketch after this list)
- Addresses common issues and offers recovery strategies
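The RLAIF phase replaces human preference labels with AI-generated ones. The sketch below is a hedged illustration under the same assumptions as before: `generate` is a placeholder for the feedback model's call, and the principle text is an illustrative stand-in for an entry sampled from the constitution; the real pipeline aggregates many such labels to train a preference model used for RL.

```python
# Hedged sketch of RLAIF preference labelling: a feedback model picks which of
# two candidate responses better satisfies a constitutional principle.
# Assumptions: `generate` is any prompt-in, text-out call to the feedback
# model; PRINCIPLE is an illustrative stand-in for a sampled constitution entry.
from typing import Callable

PRINCIPLE = "Choose the response that is more helpful, honest, and harmless."

def ai_preference(generate: Callable[[str], str], prompt: str,
                  response_a: str, response_b: str) -> str:
    """Return 'A' or 'B'; aggregated labels train the preference/reward model."""
    answer = generate(
        f"Consider the following conversation:\n\nHuman: {prompt}\n\n"
        f"{PRINCIPLE}\n\n"
        f"Option A: {response_a}\n\n"
        f"Option B: {response_b}\n\n"
        "Answer with a single letter, A or B."
    )
    return "A" if answer.strip().upper().startswith("A") else "B"
```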
Use Cases
- Safety alignment of LLMs without human labels
- Reducing harmful or toxic outputs from AI models
- Implementing explainable AI decisions through principles
- Scalable AI safety training using AI feedback
Non-Goals
- Direct human preference data collection (RLHF)
- Runtime content filtering (NeMo Guardrails)
- Pre-trained moderation models (LlamaGuard)
Installation
First, add the marketplace:
/plugin marketplace add Orchestra-Research/AI-Research-SKILLs
Then install the skill:
/plugin install AI-Research-SKILLs@ai-research-skills
Similar Extensions
Product Self Knowledge (quality score 100)
Stop and consult this skill whenever your response would include specific facts about Anthropic's products. Covers: Claude Code (how to install, Node.js requirements, platform/OS support, MCP server integration, configuration), Claude API (function calling/tool use, batch processing, SDK usage, rate limits, pricing, models, streaming), and Claude.ai (Pro vs Team vs Enterprise plans, feature limits). Trigger this even for coding tasks that use the Anthropic SDK, content creation mentioning Claude capabilities or pricing, or LLM provider comparisons. Any time you would otherwise rely on memory for Anthropic product details, verify here instead; your training data may be outdated or wrong.
Anthropic Expert (quality score 98)
Expert on the Anthropic Claude API, models, prompt engineering, function calling, vision, and best practices. Triggers on: anthropic, claude, api, prompt, function calling, vision, messages api, embeddings.
LlamaGuard (quality score 95)
Meta's 7-8B specialized moderation model for LLM input/output filtering. Six safety categories: violence/hate, sexual content, weapons, substances, self-harm, criminal planning. 94-95% accuracy. Deploy with vLLM, HuggingFace, SageMaker. Integrates with NeMo Guardrails.
Anthropic SDK (quality score 85)
Official Anthropic SDK for Claude AI with chat, streaming, function calling, and vision capabilities.