Prompt Optimization
Applies prompt repetition to improve accuracy for non-reasoning LLMs
Improves the accuracy of non-reasoning LLMs on structured tasks by automatically repeating their prompts.
Features
- Applies prompt repetition to increase LLM accuracy
- Automatic activation for Haiku agents on specific tasks
- Configurable repetition count and enablement
- Zero latency impact due to prefill stage operation
- Provides performance metrics and configuration options
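The core mechanic behind the features above can be sketched in a few lines. Note that `repeat_prompt`, its separator text, and the example task are hypothetical illustrations, not the skill's actual template:

```python
def repeat_prompt(prompt: str, n: int = 2) -> str:
    """Duplicate the prompt n times so a non-reasoning model
    re-reads the full task before answering.

    Because the repeated text is part of the input (prefill stage),
    the model processes it in parallel, so it adds negligible latency
    compared with generating extra output tokens.
    """
    separator = "\n\nTo restate the task:\n\n"
    return separator.join([prompt] * n)

# Example: a structured list-operation task of the kind this skill targets.
task = "Count the even numbers in [3, 8, 4, 7, 10]."
print(repeat_prompt(task, n=2))
```

The repetition count maps to the skill's configurable `n`; setting `n=1` leaves the prompt unchanged.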
Use Cases
- Improving accuracy for unit tests
- Enhancing reliability of linting and formatting tasks
- Increasing precision in parsing and data extraction
- Ensuring accuracy in list operations (find, filter, count)
Non-Goals
- Improving accuracy for reasoning LLMs (Opus, Sonnet)
- Tasks outside of structured operations
- Introducing latency penalties
Installation
`npx skills add asklokesh/loki-mode`
This runs the Vercel skills CLI (skills.sh) via npx; it needs Node.js locally and at least one installed skills-compatible agent (Claude Code, Cursor, Codex, …). It assumes the repo follows the agentskills.io format.
Quality Score
Verified
Similar Extensions
- Arize Prompt Optimization (100): Optimizes, improves, and debugs LLM prompts using production trace data, evaluations, and annotations. Extracts prompts from spans, gathers performance signals, and runs a data-driven optimization loop using the ax CLI. Use when the user mentions optimize prompt, improve prompt, make AI respond better, improve output quality, prompt engineering, prompt tuning, or system prompt improvement.
- Unsloth (100): Expert guidance for fast fine-tuning with Unsloth: 2-5x faster training, 50-80% less memory, LoRA/QLoRA optimization.
- CE Optimize (100): Run metric-driven iterative optimization loops: define a measurable goal, run parallel experiments, measure each against hard gates or LLM-as-judge scores, keep improvements, and converge on the best solution. Use when optimizing clustering quality, search relevance, build performance, prompt quality, or any measurable outcome that benefits from systematic experimentation.
- Chat Format (100): Format prompts for different LLM providers with chat templates and HNSW-powered context retrieval.
- Oh My Claudecode (100): Process-first advisor routing for Claude, Codex, or Gemini via `omc ask`, with artifact capture and no raw CLI assembly.
- Wrap Up Ritual (100): End-of-session ritual that audits changes, runs quality checks, captures learnings, and produces a session summary. Use when saying "wrap up", "done for the day", "finish coding", or ending a coding session.