
Do Competitively

Skill Warning Active
Part of: SADD Plugin

Execute tasks through competitive multi-agent generation, a meta-judge-authored evaluation specification, multi-judge evaluation, and evidence-based synthesis.

Purpose

To execute complex tasks by leveraging the strengths of multiple AI agents, ensuring high-quality, evidence-based results through a structured competitive evaluation process.

Features

  • Competitive generation with self-critique
  • Meta-judge evaluation specification
  • Multi-judge evaluation with verification loops
  • Adaptive strategy selection (polish, synthesize, redesign)
  • Evidence-based synthesis of results
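
The workflow below states that the meta-judge emits an evaluation specification as YAML. A hypothetical sketch of what such a spec could contain is shown here; the field names, criteria, and weights are illustrative assumptions, not taken from SKILL.md:

```yaml
# Hypothetical evaluation specification emitted by the meta-judge.
# All keys, criteria, and weights below are illustrative only.
task: "Design a rate-limiting API"
criteria:
  - name: correctness
    weight: 0.4
    rubric: "Handles all specified edge cases"
  - name: maintainability
    weight: 0.3
    rubric: "Clear structure; trade-offs documented"
  - name: performance
    weight: 0.3
    rubric: "No obvious hot-path inefficiencies"
verification:
  - "Every judge claim must cite a specific line in the candidate solution"
vote_format: "ranked candidates with per-criterion scores"
```

Pinning the rubric in a machine-readable spec before generation finishes is what lets the three judges in Phase 2 score independently yet comparably.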

Use Cases

  • Designing complex APIs or systems where multiple approaches need evaluation.
  • Generating and refining code for high-stakes projects requiring maximum quality.
  • Producing comprehensive documentation or technical specifications through comparative agent output.
  • Tasks where consensus among multiple AI perspectives is crucial for optimal outcome.

Non-Goals

  • Executing trivial tasks where the overhead of multi-agent competition is not justified.
  • Simple code generation without comparative evaluation or synthesis.
  • Replacing human oversight entirely; the framework aims to augment decision-making.

Workflow

  1. Create a reports directory.
  2. Phase 1: Launch 3 generator agents and 1 meta-judge agent in parallel.
  3. Meta-judge generates evaluation specification YAML.
  4. Generators produce independent solutions.
  5. Phase 2: Launch 3 judges in parallel, evaluating solutions against meta-judge spec.
  6. Judges produce comparative reports and votes.
  7. Phase 2.5: Orchestrator analyzes judge outputs to select strategy (SELECT_AND_POLISH, REDESIGN, FULL_SYNTHESIS).
  8. Phase 3: Based on strategy, polish a winner, redesign fundamentally flawed solutions, or synthesize the best elements from multiple solutions.
  9. Output final solution and reports.
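
Phase 2.5's strategy selection can be sketched roughly as follows. This is a minimal illustration, assuming three judges each cast one winner vote plus a flaw flag; the function, vote structure, and decision thresholds are hypothetical, not documented in SKILL.md:

```python
# Hypothetical sketch of Phase 2.5: the orchestrator picks a Phase 3
# strategy from judge outputs. Vote format and thresholds are assumptions.

def select_strategy(judge_votes, fatal_flaw_reported):
    """Choose a strategy from the judges' winner votes.

    judge_votes: one candidate id per judge, e.g. ["A", "A", "B"]
    fatal_flaw_reported: True if judges flagged every candidate as flawed
    """
    if fatal_flaw_reported:
        # No candidate is salvageable: restart using the judges' feedback.
        return "REDESIGN"

    # Tally winner votes per candidate.
    tally = {}
    for vote in judge_votes:
        tally[vote] = tally.get(vote, 0) + 1
    top_count = max(tally.values())

    if top_count == len(judge_votes):
        # Unanimous winner: refine that solution directly.
        return "SELECT_AND_POLISH"
    # Split vote: merge the strongest elements of competing solutions.
    return "FULL_SYNTHESIS"
```

Treating only a unanimous vote as grounds for polishing (and a split vote as grounds for synthesis) is one plausible policy; the real orchestrator may weigh per-criterion scores from the comparative reports instead.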

Documentation

  • info: Configuration & parameter reference. The SKILL.md provides a detailed workflow and prompt templates, but does not explicitly document all parameters or their default values for the `/do-competitively` command, nor does it detail the precedence order for any configuration files.

License

  • critical: License usability. The extension is licensed under GPL-3.0, a strong copyleft license that may not be compatible with all use cases or downstream redistribution plans.

Versioning

  • warning: Release management. No explicit versioning (e.g., semver in package.json or SKILL.md frontmatter) is detected, and installation instructions reference 'NeoLabHQ/context-engineering-kit' without a specific tag, which could lead to unexpected updates.

Installation

First, add the marketplace, then install the plugin:

/plugin marketplace add NeoLabHQ/context-engineering-kit
/plugin install sadd@context-engineering-kit

Quality Score

Warning · 75/100 · Analyzed about 22 hours ago

Trust Signals

  • Last commit: 9 days ago
  • Stars: 993
  • License: GPL-3.0-only

Similar Extensions

Ccg

99

Claude-Codex-Gemini tri-model orchestration via /ask codex + /ask gemini, then Claude synthesizes results

Skill
Yeachan-Heo

Evaluation

98

This skill should be used when the user asks to "evaluate agent performance", "build test framework", "measure agent quality", "create evaluation rubrics", or mentions LLM-as-judge, multi-dimensional evaluation, agent testing, or quality gates for agent pipelines.

Skill
muratcankoylan

Cli Creator

100

Build a composable CLI for Codex from API docs, an OpenAPI spec, existing curl examples, an SDK, a web app, an admin tool, or a local script. Use when the user wants Codex to create a command-line tool that can run from any repo, expose composable read/write commands, return stable JSON, manage auth, and pair with a companion skill.

Skill
openai

Context Mode Ops

100

Manage context-mode GitHub issues, PRs, releases, and marketing with parallel subagent army. Orchestrates 10-20 dynamic agents per task. Use when triaging issues, reviewing PRs, releasing versions, writing LinkedIn posts, announcing releases, fixing bugs, merging contributions, validating ENV vars, testing adapters, or syncing branches.

Skill
mksglu

Fixflow

100

Execute coding tasks with a strict delivery workflow: build a full plan, implement one step at a time, run tests continuously, and commit by default after each step (`per_step`). Support explicit commit policy overrides (`final_only`, `milestone`) and optional BDD (Given/When/Then) when users ask for behavior-driven delivery or requirements are unclear.

Skill
majiayu000

Kotlin Mcp Server Generator

100

Generate a complete Kotlin MCP server project with proper structure, dependencies, and implementation using the official io.modelcontextprotocol:kotlin-sdk library.

Skill
github

© 2025 SkillRepo · Find the right skill, skip the noise.