Do And Judge
Execute a task with sub-agent implementation and LLM-as-a-judge verification, with an automatic retry loop.
To execute tasks reliably by leveraging specialized sub-agents for implementation and verification, ensuring high-quality outcomes through a structured, iterative process.
Features
- Sub-agent implementation of tasks
- LLM-as-a-judge verification
- Meta-judge for evaluation specification generation
- Automatic retry loop with feedback (see the sketch after this list)
- Structured process for task analysis and model selection
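A minimal sketch of the implement-verify-retry loop described by the features above, assuming three generic callables (`worker`, `judge_agent`, `meta_judge`) that send a prompt to a sub-agent and return its text response. The function names, prompt formats, and PASS/FAIL convention are illustrative assumptions, not the plugin's actual API.

```python
# Hedged sketch of a do-and-judge loop. Assumes each "agent" is just a
# prompt-in, text-out callable; the real skill delegates to Claude Code
# sub-agents, which this stub does not model.
from dataclasses import dataclass
from typing import Callable

Agent = Callable[[str], str]  # prompt in, response out (assumption)

@dataclass
class Verdict:
    passed: bool
    feedback: str

def generate_eval_spec(meta_judge: Agent, task: str) -> str:
    """Meta-judge step: derive concrete acceptance criteria from the task."""
    return meta_judge(
        f"Write a checklist of verifiable acceptance criteria for this task:\n{task}"
    )

def judge(judge_agent: Agent, task: str, spec: str, result: str) -> Verdict:
    """LLM-as-a-judge step: score the result against the generated spec."""
    reply = judge_agent(
        f"Task:\n{task}\n\nAcceptance criteria:\n{spec}\n\n"
        f"Result:\n{result}\n\nStart your reply with PASS or FAIL, then explain."
    )
    return Verdict(passed=reply.strip().upper().startswith("PASS"), feedback=reply)

def do_and_judge(worker: Agent, judge_agent: Agent, meta_judge: Agent,
                 task: str, max_retries: int = 3) -> str:
    """Implement, verify, and retry with the judge's feedback until it passes."""
    spec = generate_eval_spec(meta_judge, task)
    feedback = ""
    for attempt in range(1, max_retries + 1):
        prompt = task if not feedback else f"{task}\n\nFix the issues below:\n{feedback}"
        result = worker(prompt)
        verdict = judge(judge_agent, task, spec, result)
        if verdict.passed:
            return result
        feedback = verdict.feedback
    # After max retries the orchestrator stops and surfaces the failure to the user.
    raise RuntimeError(f"Task failed verification after {max_retries} attempts:\n{feedback}")

if __name__ == "__main__":
    # Toy stand-ins so the sketch runs without any LLM backend.
    worker = lambda p: "def add(a, b):\n    return a + b"
    judge_agent = lambda p: "PASS: function matches the criteria."
    meta_judge = lambda p: "- defines add(a, b)\n- returns their sum"
    print(do_and_judge(worker, judge_agent, meta_judge, "Write an add(a, b) function"))
```

The design choice to generate the evaluation spec once, before the first attempt, keeps the judge's criteria stable across retries so feedback stays comparable from one attempt to the next.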
Use Cases
- Executing complex code generation tasks with quality gates
- Automating documentation updates with verification
- Refactoring code with independent review and feedback
- Ensuring task completion meets predefined quality criteria
Non-Goals
- Performing the task directly as the orchestrator
- Skipping judge verification to save time
- Writing code or making changes to source files directly
- Proceeding after max retries without user decision
Installation
First, add the marketplace:
/plugin marketplace add NeoLabHQ/context-engineering-kit
Then install the plugin:
/plugin install sadd@context-engineering-kit
Similar Extensions
- Agent Worker Specialist: Agent skill for worker-specialist - invoke with $agent-worker-specialist
- Orchestrate: Wire Commands, Agents, and Skills together for complex features. Use when building features that need research, planning, and implementation phases.
- Autopilot Loop: Run an autonomous /loop iteration -- check progress, work on next task, schedule next wake
- Team Skill: N coordinated agents on shared task list using Claude Code native teams
- Learner Skill: Extract a learned skill from the current conversation
- Ralplan: Consensus planning entrypoint that auto-gates vague ralph/autopilot/team requests before execution