SADD Plugin
Introduces skills for subagent-driven development: it dispatches a fresh subagent for each task, with code review between tasks, enabling fast iteration with quality gates.
Its goal is to enable complex, high-quality development by distributing work across specialized sub-agents with built-in quality gates and iterative refinement.
Features
- Subagent-driven development with context isolation
- Sequential and parallel task execution
- Competitive generation with multi-judge evaluation
- Tree of Thoughts for complex reasoning tasks
- Meta-judge for generating tailored evaluation criteria
- Retry mechanisms for robust task completion
- Adaptive strategy selection (polish, redesign, synthesize)
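The competitive-generation feature listed above can be illustrated with a minimal sketch. All names here (`competitive_select`, the generator and judge callables) are hypothetical illustrations of the pattern, not the plugin's actual API: each generator proposes a candidate, several judges score each candidate independently, and the candidate with the highest mean score wins.

```python
from statistics import mean
from typing import Callable, Sequence

def competitive_select(task: str,
                       generators: Sequence[Callable[[str], str]],
                       judges: Sequence[Callable[[str, str], float]]) -> str:
    """Hypothetical sketch: competitive generation with multi-judge evaluation."""
    # Each generator (a stand-in for a sub-agent) proposes one candidate.
    candidates = [g(task) for g in generators]
    # Each judge scores every candidate independently; average the scores.
    scored = [(mean(j(task, c) for j in judges), c) for c in candidates]
    # Return the candidate with the highest mean score.
    return max(scored)[1]
```

In practice the generators and judges would be separate sub-agent invocations with isolated contexts, and the synthesis step could merge evidence from several candidates rather than picking a single winner.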
Use Cases
- Decomposing and executing complex, multi-step development plans
- Parallelizing independent tasks for faster iteration
- Evaluating and refining solutions through competitive multi-agent debate
- Leveraging specialized agents for domain-specific tasks
Non-Goals
- Directly performing implementation tasks (delegates to sub-agents)
- Managing persistent state outside of the orchestration flow
- Acting as a standalone AI without agent orchestration
Practices
- Agent Orchestration
- Context Isolation
- Meta-Evaluation
- Iterative Refinement
- Adaptive Strategy Selection
License
- Critical (license usability): The plugin is licensed under GPL-3.0, which imposes strong copyleft obligations and may restrict commercial use or integration without adherence to its terms.
Installation
Add the marketplace first:
`/plugin marketplace add NeoLabHQ/context-engineering-kit`
`/plugin install sadd@context-engineering-kit`
Includes 10 extensions.
Skills (10)
- Execute a task with sub-agent implementation and LLM-as-a-judge verification, with an automatic retry loop
- Execute tasks through competitive multi-agent generation, meta-judge evaluation specification, multi-judge evaluation, and evidence-based synthesis
- Launch multiple sub-agents in parallel to execute tasks across files or targets, with intelligent model selection, quality-focused prompting, and meta-judge → LLM-as-a-judge verification
- Execute complex tasks through sequential sub-agent orchestration, with intelligent model selection and meta-judge → LLM-as-a-judge verification
- Launch a meta-judge and then a judge sub-agent to evaluate results produced in the current conversation
- Evaluate solutions through multi-round debate between independent judges until consensus is reached
- Launch an intelligent sub-agent with automatic model selection based on task complexity, specialized agent matching, zero-shot CoT reasoning, and mandatory self-critique verification
- Design multi-agent architectures for complex tasks; use when single-agent context limits are exceeded, when tasks decompose naturally into subtasks, or when specializing agents improves quality
- Use when executing implementation plans with independent tasks in the current session, or when facing 3+ independent issues that can be investigated without shared state or dependencies; dispatches a fresh subagent for each task, with code review between tasks, enabling fast iteration with quality gates
- Execute tasks through systematic exploration, pruning, and expansion using the Tree of Thoughts methodology, with meta-judge evaluation specifications and multi-agent evaluation
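The judge-verify-retry pattern that several of the skills above share can be sketched as follows. This is a conceptual illustration with hypothetical names (`run_with_judge`, `TaskResult`), not the plugin's actual implementation: each attempt dispatches a fresh "subagent" call, a judge gates the result, and the judge's feedback is fed into the next attempt.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class TaskResult:
    output: str
    passed: bool
    feedback: str

def run_with_judge(task: str,
                   implement: Callable[[str, str], str],
                   judge: Callable[[str, str], TaskResult],
                   max_retries: int = 3) -> TaskResult:
    """Hypothetical sketch: sub-agent execution gated by LLM-as-a-judge
    verification with an automatic retry loop."""
    feedback = ""
    result = TaskResult(output="", passed=False, feedback="no attempts made")
    for _ in range(max_retries):
        output = implement(task, feedback)   # fresh sub-agent attempt
        result = judge(task, output)         # judge verifies the result
        if result.passed:
            return result
        feedback = result.feedback           # critique feeds the retry
    return result
```

Because each attempt starts from a fresh sub-agent call that sees only the task and the judge's critique, context stays isolated between iterations, which is the quality-gate loop the plugin description advertises.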
Similar Extensions
- Ag2 Agent Builder (100): Build AG2 (AutoGen) multi-agent systems with slash commands: scaffold agents, wire workflows, create tools, and review code
- Oh My Claudecode (99): Multi-agent orchestration system for Claude Code
- Agent Triforce (99): Multi-agent development system with three specialized agents (PM, Dev, QA) coordinated through a checklist methodology based on The Checklist Manifesto and Boeing's checklist engineering
- Agenthub (99): Multi-agent collaboration plugin for Claude Code. Spawn N parallel subagents that compete on code optimization, content drafts, research approaches, or any problem that benefits from diverse solutions. Evaluate by metric or LLM judge, then merge the winner. Includes 7 slash commands, agent templates, git DAG orchestration, and message-board coordination.
- Multi Platform Apps (99): Cross-platform application development coordinating web, iOS, Android, and desktop implementations
- Plugin Eval (98): Three-layer quality evaluation framework for Claude Code plugins with Elo ranking