Instructor
Extract structured data from LLM responses with Pydantic validation, retry failed extractions automatically, parse complex JSON with type safety, and stream partial results with Instructor, a battle-tested structured-output library.
The goal is to reliably extract and validate structured data from LLM responses, enabling safer and more predictable integration of LLM outputs into applications.
Features
- Extract structured data with Pydantic models
- Automatic validation of LLM outputs
- Retry failed extractions with error feedback
- Parse complex JSON with type safety
- Stream partial results for real-time processing
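The features above can be sketched as follows. This is a minimal, hedged example: the schema, field names, and model name are illustrative, and the live Instructor call (which requires the `instructor` and `openai` packages plus an API key) is shown as a commented sketch rather than executed.

```python
from pydantic import BaseModel, field_validator

# Illustrative target schema; Instructor validates the LLM's output against it.
class UserProfile(BaseModel):
    name: str
    age: int

    @field_validator("age")
    @classmethod
    def age_must_be_plausible(cls, v: int) -> int:
        # A failing validator is what triggers Instructor's retry-with-feedback.
        if not 0 <= v <= 150:
            raise ValueError("age out of range")
        return v

# The same Pydantic validation applies to any JSON an LLM returns:
raw = '{"name": "Ada Lovelace", "age": 36}'
profile = UserProfile.model_validate_json(raw)

# Live extraction sketch (not run here; needs credentials and network access):
# import instructor
# from openai import OpenAI
#
# client = instructor.from_openai(OpenAI())
# profile = client.chat.completions.create(
#     model="gpt-4o-mini",              # illustrative model name
#     response_model=UserProfile,       # Instructor enforces this schema
#     max_retries=3,                    # failed validations retried with error feedback
#     messages=[{"role": "user",
#                "content": "Ada Lovelace was 36 when she died."}],
# )
```

When validation fails, Instructor feeds the validator's error message back to the model on the next attempt, which is why descriptive `ValueError` messages matter.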
Use Cases
- Extracting user profiles, product details, or financial data from text
- Classifying text into predefined categories with confidence scores
- Parsing complex nested JSON outputs from LLMs
- Building real-time applications that consume LLM-generated structured data
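As an illustration of the classification use case, a schema can constrain both the category and the confidence score so out-of-range outputs are rejected. The field names and categories here are assumptions for the sketch, not part of the skill itself.

```python
from typing import Literal
from pydantic import BaseModel, Field

class TicketLabel(BaseModel):
    """Illustrative classification schema for support tickets."""
    # Literal restricts the model to the predefined categories.
    category: Literal["billing", "bug", "feature_request", "other"]
    # Field constraints reject confidence scores outside [0, 1].
    confidence: float = Field(ge=0.0, le=1.0)

# The same validation Instructor applies to an LLM response:
label = TicketLabel.model_validate_json('{"category": "bug", "confidence": 0.92}')

# With Instructor, passing response_model=TicketLabel to the patched client means
# an invalid category or confidence raises a validation error and triggers a retry.
```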
Non-Goals
- Replacing the core LLM provider itself
- Performing complex data transformations beyond validation
- Handling LLM API calls directly without Instructor's structured output features
Trust
- Warning (issues attention): there are 17 open issues and 4 closed issues in the last 90 days, indicating a low closure rate and potentially slow maintainer response.
Installation
npx skills add davila7/claude-code-templates
Runs the Vercel skills CLI (skills.sh) via npx; requires Node.js locally and at least one installed skills-compatible agent (Claude Code, Cursor, Codex, …). Assumes the repo follows the agentskills.io format.
Similar Extensions
Arize Prompt Optimization
Score 100: Optimizes, improves, and debugs LLM prompts using production trace data, evaluations, and annotations. Extracts prompts from spans, gathers performance signal, and runs a data-driven optimization loop using the ax CLI. Use when the user mentions optimize prompt, improve prompt, make AI respond better, improve output quality, prompt engineering, prompt tuning, or system prompt improvement.
Prompt Optimization
Score 100: Applies prompt repetition to improve accuracy for non-reasoning LLMs.
Create Atomic Tool
Score 99: Build a `BaseTool[InSchema, OutSchema]` subclass — input/output schemas, `BaseToolConfig`, `run()` (and optional `run_async()`), env-driven secrets, typed failure outputs. Use when the user asks to "add a tool", "create a tool", "wrap an API as a tool", "build a `BaseTool`", "make a calculator/search/weather tool", or runs `/atomic-agents:create-atomic-tool`.
Guidance
Score 99: Control LLM output with regex and grammars, guarantee valid JSON/XML/code generation, enforce structured formats, and build multi-step workflows with Guidance, Microsoft Research's constrained generation framework.
Create Atomic Schema
Score 98: Design and write a `BaseIOSchema` input/output pair for an Atomic Agents agent or tool — docstrings, field descriptions, validators, error variants. Use when the user asks to "create a schema", "design the input/output schema", "define an `IOSchema`", "write a `BaseIOSchema`", "model the agent's output", or runs `/atomic-agents:create-atomic-schema`.