CE Optimize
Run metric-driven iterative optimization loops: define a measurable goal, run parallel experiments, measure each against hard gates or LLM-as-judge scores, keep improvements, and converge on the best solution. Use when optimizing clustering quality, search relevance, build performance, prompt quality, or any measurable outcome that benefits from systematic experimentation.
Purpose: systematically improve measurable outcomes through automated, iterative experimentation and convergence.
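The loop described above can be sketched in a few lines. This is a minimal, hypothetical illustration, not the skill's actual implementation: `run_experiment` stands in for applying one variant and measuring it, and the toy metric simply peaks at `x = 3`.

```python
import random

def run_experiment(params):
    """Hypothetical stand-in for one experiment variant; a real run would
    apply a code/config change and invoke the measurement harness."""
    return -(params["x"] - 3.0) ** 2  # toy metric: higher is better, peak at x=3

def optimize(n_rounds=20, seed=0):
    """Metric-driven loop: propose a variant, measure it, keep it only if
    the score improves, and return the best parameters seen."""
    rng = random.Random(seed)
    best_params = {"x": 0.0}
    best_score = run_experiment(best_params)
    for _ in range(n_rounds):
        candidate = {"x": best_params["x"] + rng.uniform(-1, 1)}
        score = run_experiment(candidate)
        if score > best_score:  # keep-improvements gate
            best_params, best_score = candidate, score
    return best_params, best_score
```

Because only improving variants are kept, the best score is monotonically non-decreasing across rounds, which is what lets the loop converge rather than drift.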
Features
- Metric-driven iterative optimization loops
- Support for hard metrics and LLM-as-judge
- Automated experiment execution and measurement
- Robust persistence and crash recovery
- Scoped modification of code and configuration
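The persistence and crash-recovery feature listed above usually comes down to checkpointing loop state after each round. A minimal sketch, assuming state is a JSON-serializable dict and using a hypothetical checkpoint path:

```python
import json
import os
import tempfile

# Hypothetical checkpoint location for illustration only.
STATE_FILE = os.path.join(tempfile.gettempdir(), "ce_optimize_state.json")

def save_state(state, path=STATE_FILE):
    """Write the checkpoint atomically: a crash mid-write leaves the old
    checkpoint intact instead of a corrupt half-written file."""
    tmp = path + ".tmp"
    with open(tmp, "w") as f:
        json.dump(state, f)
    os.replace(tmp, path)  # atomic on POSIX and Windows

def load_state(path=STATE_FILE):
    """Resume from the last checkpoint if one exists, else start fresh."""
    if os.path.exists(path):
        with open(path) as f:
            return json.load(f)
    return {"round": 0, "best_score": None}
```

The write-to-temp-then-rename pattern is what makes recovery robust: the loop can be killed at any point and restarted from the last completed round.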
Use Cases
- Optimizing build performance or test coverage
- Tuning LLM prompts for quality and cost
- Improving search relevance or clustering quality
- Systematically experimenting with code or configuration variants
Non-Goals
- Implementing the core logic being optimized
- Replacing manual code development entirely
- Running experiments without a defined measurement harness
- Performing optimizations that cannot be measured or evaluated systematically
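The last non-goal is worth making concrete: every experiment needs a defined measurement harness. A minimal sketch, assuming the harness is any command that prints a single numeric score to stdout (the command name and gate value here are illustrative):

```python
import subprocess

def measure(command, gate=None):
    """Run a measurement harness command and parse its single-number
    stdout as the score. `gate` is an optional hard threshold the
    score must meet or exceed to pass."""
    result = subprocess.run(command, capture_output=True, text=True, check=True)
    score = float(result.stdout.strip())
    passed = gate is None or score >= gate
    return score, passed
```

Keeping the harness a plain command with a numeric contract means any experiment variant, whether it changes code, config, or a prompt, is measured the same way.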
Practices
- Experiment design
- Iterative development
- Metric definition
- Code quality
- MLOps
Prerequisites
- Git repository
- Bash shell
- Python 3
- The `ce-optimize` skill installed
Installation
First, add the marketplace, then install the plugin:

/plugin marketplace add EveryInc/compound-engineering-plugin
/plugin install compound-engineering@compound-engineering-plugin
Similar Extensions
Moyu (摸鱼)
Quality score: 100. Activates automatically when over-engineering patterns are detected: (1) modifying code or files the user did not explicitly ask to change; (2) creating unrequested abstraction layers (class, interface, factory, wrapper); (3) adding unrequested comments, documentation, JSDoc, or type annotations; (4) introducing unrequested new dependencies; (5) rewriting an entire file instead of making minimal edits; (6) producing a diff whose scope clearly exceeds the user's request; (7) reacting when the user signals "too much", "don't touch that", "only change X", "keep it simple", or "stop"; (8) adding error handling, validation, or defensive code for scenarios that cannot occur; (9) generating unrequested tests, configuration scaffolding, or documentation.
Arize Experiment
Quality score: 100. Creates, runs, and analyzes Arize experiments for evaluating and comparing model performance. Covers experiment CRUD, exporting runs, comparing results, and evaluation workflows using the ax CLI. Use when the user mentions create experiment, run experiment, compare models, model performance, evaluate AI, experiment results, benchmark, A/B test models, or measure accuracy.
Arize Prompt Optimization
Quality score: 100. Optimizes, improves, and debugs LLM prompts using production trace data, evaluations, and annotations. Extracts prompts from spans, gathers performance signal, and runs a data-driven optimization loop using the ax CLI. Use when the user mentions optimize prompt, improve prompt, make AI respond better, improve output quality, prompt engineering, prompt tuning, or system prompt improvement.
Prompt Optimization
Quality score: 100. Applies prompt repetition to improve accuracy for non-reasoning LLMs.
Vector Index Tuning
Quality score: 99. Optimize vector index performance for latency, recall, and memory. Use when tuning HNSW parameters, selecting quantization strategies, or scaling vector search infrastructure.
Migrate Validate
Quality score: 100. Validate pending migrations for foreign key consistency, rollback safety, and best practices.