Guidance
Skill (active): Control LLM output with regex and grammars, guarantee valid JSON/XML/code generation, enforce structured formats, and build multi-step workflows with Guidance, Microsoft Research's constrained generation framework.
To provide developers with a powerful tool for precisely controlling LLM output, guaranteeing valid structured formats like JSON and XML, and building complex multi-step generation workflows.
Features
- Control LLM output syntax with regex and grammars
- Guarantee valid JSON/XML/code generation
- Enforce structured formats (dates, emails, IDs)
- Build multi-step workflows with Pythonic control flow
- Reduce latency compared with traditional prompting
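The core idea behind these features, constrained decoding, can be sketched in a few lines of plain Python: the model proposes characters freely, while a per-position "grammar" masks any proposal that would break the target format. This is a deliberately simplified toy, not Guidance's actual token-level implementation; `propose` and `sloppy_model` are hypothetical stand-ins for a real model.

```python
import re

# Toy "grammar" for an ISO date (YYYY-MM-DD): one set of
# allowed characters per output position.
DIGITS = set("0123456789")
DATE_GRAMMAR = [DIGITS] * 4 + [{"-"}] + [DIGITS] * 2 + [{"-"}] + [DIGITS] * 2

def constrained_generate(propose, grammar):
    """Ask `propose` for each character, but mask proposals that
    violate the grammar, falling back to a valid character."""
    out = []
    for allowed in grammar:
        ch = propose("".join(out))   # model's unconstrained guess
        if ch not in allowed:        # mask invalid continuations
            ch = sorted(allowed)[0]
        out.append(ch)
    return "".join(out)

# A sloppy stand-in "model" that sometimes emits letters or
# misplaced punctuation where digits belong.
def sloppy_model(prefix):
    return "2x024-13-0a5!"[len(prefix)]

result = constrained_generate(sloppy_model, DATE_GRAMMAR)
print(result)  # -> "2002-01-00", guaranteed to match \d{4}-\d{2}-\d{2}
```

Real constrained decoders apply the same masking at the token-logit level inside the sampling loop, which is also why they can cut latency: invalid continuations are never sampled, so no retries or re-prompts are needed.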
Use cases
- When you need to guarantee valid JSON output for an API response.
- When enforcing a specific date or email format for user input.
- When building agent workflows that require structured intermediate thoughts or observations.
- When reducing token waste and latency by directly generating valid outputs.
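The guaranteed-JSON use case above rests on one trick: the program owns the structure and the model only fills value slots, so the output is valid by construction. Below is a minimal sketch of that idea under stated assumptions; `fill_schema` and the `stub_model` callable are hypothetical illustrations, not Guidance's API.

```python
import json

def fill_schema(schema, propose):
    """Recursively copy the fixed JSON structure; only leaf value
    slots (marked here by Python types) are delegated to the model."""
    if isinstance(schema, dict):
        return {key: fill_schema(value, propose) for key, value in schema.items()}
    if isinstance(schema, list):
        return [fill_schema(item, propose) for item in schema]
    if schema is str:
        return str(propose("string"))
    if schema is int:
        return int(propose("int"))
    raise TypeError(f"unsupported slot type: {schema!r}")

# The API response shape we want to guarantee.
schema = {"user": {"name": str, "age": int}, "tags": [str, str]}

# Stand-in model: always answers the same thing per slot type.
def stub_model(slot):
    return {"string": "example", "int": 42}[slot]

payload = fill_schema(schema, stub_model)
print(json.dumps(payload))  # always parses: structure is fixed, only values vary
```

Because braces, keys, and commas never come from the model, no post-hoc validation or retry loop is required; the same interleaving of fixed template and constrained slots underlies grammar-based JSON generation.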
Non-goals
- Performing LLM inference directly; it relies on configured backends.
- Handling arbitrary file system operations or system commands.
- Replacing the core functionality of LLM providers like OpenAI or Anthropic.
Trust
- Warning (issues attention): In the last 90 days, 17 issues were opened and 4 were closed, a low closure rate of 23.5% that suggests slow response times for open issues.
Execution
- Info (pinned dependencies): Dependencies are listed in SKILL.md but not explicitly pinned with lockfiles, which could lead to versioning conflicts.
Installation
npx skills add davila7/claude-code-templates
Runs the Vercel skills CLI (skills.sh) via npx. Requires a local Node.js installation and at least one skills-compatible agent (Claude Code, Cursor, Codex, etc.), and assumes the repository follows the agentskills.io format.
Quality score
Similar extensions
Arize Prompt Optimization
100: Optimizes, improves, and debugs LLM prompts using production trace data, evaluations, and annotations. Extracts prompts from spans, gathers performance signal, and runs a data-driven optimization loop using the ax CLI. Use when the user mentions optimize prompt, improve prompt, make AI respond better, improve output quality, prompt engineering, prompt tuning, or system prompt improvement.
Prompt Optimization
100: Applies prompt repetition to improve the accuracy of non-reasoning LLMs.
Teach Guidance
100: Guide a person in becoming a better teacher and explainer. AI coaches content structuring, audience calibration, explanation clarity, Socratic questioning technique, feedback interpretation, and reflective practice for technical presentations, documentation, and mentoring. Use when a person needs to present technical content and wants preparation coaching, wants to write better documentation or tutorials, struggles to explain concepts across expertise levels, is mentoring a colleague, or is preparing for a talk or knowledge-sharing session.
Chat Format
100: Format prompts for different LLM providers with chat templates and HNSW-powered context retrieval.
Oh My Claudecode
100: Process-first advisor routing for Claude, Codex, or Gemini via `omc ask`, with artifact capture and no raw CLI assembly.