
SADD Plugin

Plugin Warning Active

Introduces skills for subagent-driven development: dispatches a fresh subagent for each task, with code review between tasks, enabling fast iteration with quality gates.

10 Skills · 0 MCPs
Purpose

To enable complex, high-quality development tasks by distributing work across specialized sub-agents with built-in quality gates and iterative refinement.

Features

  • Subagent-driven development with context isolation
  • Sequential and parallel task execution
  • Competitive generation with multi-judge evaluation
  • Tree of Thoughts for complex reasoning tasks
  • Meta-judge for generating tailored evaluation criteria
  • Retry mechanisms for robust task completion
  • Adaptive strategy selection (polish, redesign, synthesize)

Use Cases

  • Decomposing and executing complex, multi-step development plans
  • Parallelizing independent tasks for faster iteration
  • Evaluating and refining solutions through competitive multi-agent debate
  • Leveraging specialized agents for domain-specific tasks

Non-Goals

  • Directly performing implementation tasks (delegates to sub-agents)
  • Managing persistent state outside of the orchestration flow
  • Acting as a standalone AI without agent orchestration

Practices

  • Agent Orchestration
  • Context Isolation
  • Meta-Evaluation
  • Iterative Refinement
  • Adaptive Strategy Selection

License

  • Critical (license usability): The plugin is licensed under GPL-3.0, which imposes strong copyleft obligations and may restrict commercial use or integration unless its terms are followed.

Installation

Add the marketplace first

/plugin marketplace add NeoLabHQ/context-engineering-kit
/plugin install sadd@context-engineering-kit

Includes 10 extensions

Skills (10)

Do And Judge Skill

Execute a task with sub-agent implementation and LLM-as-a-judge verification, with an automatic retry loop

98
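The do-and-judge loop this skill describes can be sketched as follows. This is a minimal illustration, not the plugin's actual implementation: `run_subagent` and `judge` are hypothetical stand-ins for the real subagent dispatch and LLM-as-a-judge scoring, which this listing does not specify.

```python
def run_subagent(role: str, prompt: str) -> str:
    """Hypothetical stand-in for dispatching a fresh subagent."""
    return f"[{role}] {prompt}"

def judge(result: str, attempt: int) -> int:
    """Hypothetical stand-in judge; here scores simply improve per retry."""
    return 60 + attempt * 25

def do_and_judge(task: str, threshold: int = 80, max_retries: int = 3):
    """Implement, judge, and retry with feedback until the score passes."""
    attempts = 0
    feedback = ""
    for attempt in range(max_retries + 1):
        attempts += 1
        result = run_subagent("implementer", task + feedback)
        score = judge(result, attempt)
        if score >= threshold:
            return result, score, attempts
        # Feed the failure back so the next fresh subagent can improve.
        feedback = " | retry: prior attempt scored below threshold"
    return result, score, attempts
```

With the stand-in judge above, the loop fails once (score 60), then passes on the second attempt (score 85), mirroring the verification-with-retry flow the skill describes.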
Do Competitively Skill

Execute tasks through competitive multi-agent generation, meta-judge evaluation specification, multi-judge evaluation, and evidence-based synthesis

75
Do In Parallel Skill

Launch multiple sub-agents in parallel to execute tasks across files or targets with intelligent model selection, quality-focused prompting, and meta-judge → LLM-as-a-judge verification

100
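The fan-out pattern behind this skill can be sketched with standard-library thread pools; `run_subagent` is again a hypothetical placeholder for real subagent dispatch, and the sketch omits the model selection and judge verification the skill layers on top.

```python
from concurrent.futures import ThreadPoolExecutor

def run_subagent(task: str) -> str:
    """Hypothetical stand-in for one subagent executing one task."""
    return f"done: {task}"

def do_in_parallel(tasks: list[str]) -> list[str]:
    # Fan out: one fresh subagent per independent task.
    # pool.map preserves input order when collecting results.
    with ThreadPoolExecutor(max_workers=len(tasks)) as pool:
        return list(pool.map(run_subagent, tasks))
```

This only pays off when the tasks share no state, which is exactly the precondition the skill descriptions emphasize.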
Do In Steps Skill

Execute complex tasks through sequential sub-agent orchestration with intelligent model selection, meta-judge → LLM-as-a-judge verification

97
Judge Skill

Launch a meta-judge then a judge sub-agent to evaluate results produced in the current conversation

95
Judge With Debate Skill

Evaluate solutions through multi-round debate between independent judges until consensus

75
Launch Sub Agent Skill

Launch an intelligent sub-agent with automatic model selection based on task complexity, specialized agent matching, Zero-shot CoT reasoning, and mandatory self-critique verification

99
Multi Agent Patterns Skill

Design multi-agent architectures for complex tasks. Use when single-agent context limits are exceeded, when tasks decompose naturally into subtasks, or when specializing agents improves quality.

99
Subagent Driven Development Skill

Use when executing implementation plans with independent tasks in the current session, or when facing 3+ independent issues that can be investigated without shared state or dependencies. Dispatches a fresh subagent for each task, with code review between tasks, enabling fast iteration with quality gates.

95
Tree Of Thoughts Skill

Execute tasks through systematic exploration, pruning, and expansion using Tree of Thoughts methodology with meta-judge evaluation specifications and multi-agent evaluation

95
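The explore-prune-expand cycle of Tree of Thoughts can be sketched as a simple beam search. `expand` and `score` are hypothetical stand-ins for the subagent proposal and multi-agent evaluation steps the skill describes:

```python
def expand(thought: str) -> list[str]:
    """Hypothetical stand-in: a subagent proposes candidate next steps."""
    return [f"{thought}->a", f"{thought}->b"]

def score(thought: str) -> int:
    """Hypothetical stand-in judge; here deeper paths score higher."""
    return len(thought)

def tree_of_thoughts(root: str, depth: int = 3, beam: int = 2) -> str:
    """Expand every surviving thought, prune to the top `beam`, repeat."""
    frontier = [root]
    for _ in range(depth):
        candidates = [c for t in frontier for c in expand(t)]
        frontier = sorted(candidates, key=score, reverse=True)[:beam]
    return frontier[0]  # best surviving line of reasoning
```

The pruning step is what keeps the search tractable: instead of exploring all 2^depth branches, only `beam` thoughts survive each round.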

Quality Score

Warning
75 / 100
Analyzed 1 day ago

Trust Signals

Last commit: 9 days ago
Stars: 993
License: GPL-3.0-or-later