Do Competitively
Skill · Warning · Active
Execute tasks through competitive multi-agent generation, meta-judge evaluation specification, multi-judge evaluation, and evidence-based synthesis.
To execute complex tasks by leveraging the strengths of multiple AI agents, ensuring high-quality, evidence-based results through a structured competitive evaluation process.
Features
- Competitive generation with self-critique
- Meta-judge evaluation specification
- Multi-judge evaluation with verification loops
- Adaptive strategy selection (polish, synthesize, redesign)
- Evidence-based synthesis of results
Use cases
- Designing complex APIs or systems where multiple approaches need evaluation.
- Generating and refining code for high-stakes projects requiring maximum quality.
- Producing comprehensive documentation or technical specifications through comparative agent output.
- Tasks where consensus among multiple AI perspectives is crucial to an optimal outcome.
Non-goals
- Executing trivial tasks where the overhead of multi-agent competition is not justified.
- Simple code generation without comparative evaluation or synthesis.
- Replacing human oversight entirely; the framework aims to augment decision-making.
Workflow
- Create reports directory.
- Phase 1: Launch 3 generator agents and 1 meta-judge agent in parallel.
- Meta-judge generates evaluation specification YAML.
- Generators produce independent solutions.
- Phase 2: Launch 3 judges in parallel, evaluating solutions against meta-judge spec.
- Judges produce comparative reports and votes.
- Phase 2.5: Orchestrator analyzes judge outputs to select strategy (SELECT_AND_POLISH, REDESIGN, FULL_SYNTHESIS).
- Phase 3: Based on strategy, polish a winner, redesign fundamentally flawed solutions, or synthesize the best elements from multiple solutions.
- Output final solution and reports.
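The Phase 2.5 strategy selection described above can be sketched as follows. This is a hypothetical illustration only: the skill's actual orchestrator logic, vote format, and decision thresholds are not documented here, and the function and variable names (`select_strategy`, `judge_votes`, `flaw_reports`) are assumptions made for the sketch. It assumes each judge emits one best-solution vote plus a flag indicating whether it found every candidate fundamentally flawed.

```python
from collections import Counter

def select_strategy(judge_votes, flaw_reports):
    """Pick a synthesis strategy from judge outputs (illustrative sketch).

    judge_votes:  list of solution ids, one "best solution" vote per judge
    flaw_reports: list of bools, one per judge; True means that judge found
                  a fundamental flaw in every candidate solution
    """
    if all(flaw_reports):
        # Every judge rejected all candidates: start over.
        return ("REDESIGN", None)
    tally = Counter(judge_votes)
    winner, count = tally.most_common(1)[0]
    if count == len(judge_votes):
        # Unanimous winner: refine that single solution.
        return ("SELECT_AND_POLISH", winner)
    # Split vote: combine the strongest elements of several solutions.
    return ("FULL_SYNTHESIS", sorted(tally))

print(select_strategy(["A", "A", "A"], [False, False, False]))
# ('SELECT_AND_POLISH', 'A')
```

A split vote such as `["A", "B", "A"]` would instead return `FULL_SYNTHESIS` with the candidate pool, matching the "synthesize the best elements" branch of Phase 3.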
Documentation
- info: Configuration & parameter reference. The SKILL.md provides a detailed workflow and prompt templates, but does not explicitly document all parameters or their default values for the `/do-competitively` command, nor the precedence order for any configuration files.
License
- critical: License usability. The extension is licensed under GPL-3.0, a strong copyleft license that may not be compatible with all use cases or downstream redistribution plans.
Versioning
- warning: Release management. No explicit versioning (e.g., semver in package.json or SKILL.md frontmatter) is detected, and the installation instructions reference 'NeoLabHQ/context-engineering-kit' without a specific tag, which could lead to unexpected updates.
Installation
Add the marketplace first:
/plugin marketplace add NeoLabHQ/context-engineering-kit
/plugin install sadd@context-engineering-kit
Quality score
Warning
Similar extensions
Ccg (score: 99)
Claude-Codex-Gemini tri-model orchestration via /ask codex + /ask gemini, then Claude synthesizes the results.
Evaluation (score: 98)
This skill should be used when the user asks to "evaluate agent performance", "build test framework", "measure agent quality", "create evaluation rubrics", or mentions LLM-as-judge, multi-dimensional evaluation, agent testing, or quality gates for agent pipelines.
Cli Creator (score: 100)
Build a composable CLI for Codex from API docs, an OpenAPI spec, existing curl examples, an SDK, a web app, an admin tool, or a local script. Use when the user wants Codex to create a command-line tool that can run from any repo, expose composable read/write commands, return stable JSON, manage auth, and pair with a companion skill.
Context Mode Ops (score: 100)
Manage context-mode GitHub issues, PRs, releases, and marketing with an army of parallel subagents. Orchestrates 10-20 dynamic agents per task. Use when triaging issues, reviewing PRs, cutting releases, writing LinkedIn posts, announcing releases, fixing bugs, merging contributions, validating ENV variables, testing adapters, or syncing branches.
Fixflow (score: 100)
Execute coding tasks with a strict delivery workflow: build a complete plan, implement step by step, run tests continuously, and commit after every step (`per_step`) by default. Supports explicit commit-strategy overrides (`final_only`, `milestone`) and optional BDD (Given/When/Then) when the user requests behavior-driven delivery or requirements are unclear.
Kotlin Mcp Server Generator (score: 100)
Generate a complete Kotlin MCP server project with proper structure, dependencies, and implementation using the official io.modelcontextprotocol:kotlin-sdk library.