
Instructor


Extract structured data from LLM responses with Pydantic validation, retry failed extractions automatically, parse complex JSON with type safety, and stream partial results with Instructor, a battle-tested structured-output library.

Purpose

To reliably extract and validate structured data from LLM responses, enabling safer and more predictable integration of LLM outputs into applications.
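In practice this means binding model output to a typed schema instead of working with raw dicts. Instructor does this with Pydantic models passed as a `response_model=` argument; the stdlib-only sketch below uses dataclasses and hypothetical `Profile`/`Address` names to illustrate the underlying idea of parsing an LLM's JSON reply into typed objects:

```python
import json
from dataclasses import dataclass

@dataclass
class Address:
    city: str
    zip_code: str

@dataclass
class Profile:
    name: str
    addresses: list  # list of Address

def parse_profile(raw: str) -> Profile:
    """Parse an LLM's JSON reply into typed objects instead of raw dicts."""
    data = json.loads(raw)
    addresses = [Address(**a) for a in data["addresses"]]
    return Profile(name=data["name"], addresses=addresses)

# A JSON string as an LLM might return it:
llm_output = '{"name": "Ada", "addresses": [{"city": "London", "zip_code": "N1"}]}'
profile = parse_profile(llm_output)
print(profile.addresses[0].city)  # prints London
```

With Instructor the schema would additionally carry field validators, so a malformed reply fails loudly at the boundary rather than deep inside application code.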

Features

  • Extract structured data with Pydantic models
  • Automatic validation of LLM outputs
  • Retry failed extractions with error feedback
  • Parse complex JSON with type safety
  • Stream partial results for real-time processing
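The retry feature works by feeding validation errors back to the model so it can self-correct. Instructor wires this into a `max_retries=` argument on the patched client; the sketch below reimplements the loop with a stubbed model call and hand-rolled validation (all names here are illustrative, not Instructor's real code), just to show the pattern:

```python
import json

def validate_user(data):
    """Minimal stand-in for a Pydantic model: check fields and types."""
    errors = []
    if not isinstance(data.get("name"), str):
        errors.append("name must be a string")
    if not isinstance(data.get("age"), int):
        errors.append("age must be an integer")
    return errors

def extract_with_retries(call_model, prompt, max_retries=3):
    """Validate each response; on failure, resend with the error messages."""
    messages = [{"role": "user", "content": prompt}]
    for _ in range(max_retries):
        raw = call_model(messages)
        data = json.loads(raw)
        errors = validate_user(data)
        if not errors:
            return data
        # Feed validation errors back so the model can self-correct.
        messages.append({"role": "user",
                         "content": "Fix these errors: " + "; ".join(errors)})
    raise ValueError("extraction failed after retries")

# Stubbed model: the first answer has a wrong type, the retry is valid.
replies = iter(['{"name": "Ada", "age": "36"}', '{"name": "Ada", "age": 36}'])
def fake_model(messages):
    return next(replies)

print(extract_with_retries(fake_model, "Extract the user."))
# prints {'name': 'Ada', 'age': 36}
```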

Use Cases

  • Extracting user profiles, product details, or financial data from text
  • Classifying text into predefined categories with confidence scores
  • Parsing complex nested JSON outputs from LLMs
  • Building real-time applications that consume LLM-generated structured data
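For the real-time case, streaming partial results amounts to repeatedly trying to parse the incomplete JSON seen so far. A minimal sketch of that idea, assuming a naive close-open-strings-and-brackets completion step (Instructor's own streaming is richer, e.g. partial Pydantic models, and does not work this way verbatim):

```python
import json

def try_partial(buffer: str):
    """Naively complete unfinished JSON, then attempt to parse it."""
    in_string = False
    escape = False
    closers = []
    for ch in buffer:
        if escape:
            escape = False
        elif ch == "\\" and in_string:
            escape = True
        elif ch == '"':
            in_string = not in_string
        elif not in_string and ch in "{[":
            closers.append("}" if ch == "{" else "]")
        elif not in_string and ch in "}]" and closers:
            closers.pop()
    completed = buffer + ('"' if in_string else "") + "".join(reversed(closers))
    try:
        return json.loads(completed)
    except json.JSONDecodeError:
        return None  # not parseable yet; wait for more tokens

# Simulated token stream for: {"name": "Ada Lovelace", "age": 36}
buffer = ""
for chunk in ['{"name": "Ada', ' Lovelace"', ', "age": 36}']:
    buffer += chunk
    partial = try_partial(buffer)
    if partial is not None:
        print(partial)
# prints:
# {'name': 'Ada'}
# {'name': 'Ada Lovelace'}
# {'name': 'Ada Lovelace', 'age': 36}
```

A UI consuming this stream can render fields as they become available instead of waiting for the full response.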

Non-Goals

  • Replacing the core LLM provider itself
  • Performing complex data transformations beyond validation
  • Handling LLM API calls directly without Instructor's structured output features

Trust

  • Warning (Issues Attention): there are 17 open issues and 4 closed issues in the last 90 days, indicating a low closure rate and potentially slow maintainer response.

Installation

npx skills add davila7/claude-code-templates

Runs the Vercel skills CLI (skills.sh) via npx. Requires Node.js locally and at least one installed skills-compatible agent (Claude Code, Cursor, Codex, …). Assumes the repo follows the agentskills.io format.

Quality Score

75/100
Analyzed 1 day ago

Trust Signals

Last commit: 1 day ago
Stars: 27.2k
License: MIT

Similar Extensions

Instructor

98

Extract structured data from LLM responses with Pydantic validation, retry failed extractions automatically, parse complex JSON with type safety, and stream partial results with Instructor - battle-tested structured output library

Skill
Orchestra-Research

Arize Prompt Optimization

100

Optimizes, improves, and debugs LLM prompts using production trace data, evaluations, and annotations. Extracts prompts from spans, gathers performance signal, and runs a data-driven optimization loop using the ax CLI. Use when the user mentions optimize prompt, improve prompt, make AI respond better, improve output quality, prompt engineering, prompt tuning, or system prompt improvement.

Skill
github

Prompt Optimization

100

Applies prompt repetition to improve accuracy for non-reasoning LLMs

Skill
asklokesh

Create Atomic Tool

99

Build a `BaseTool[InSchema, OutSchema]` subclass — input/output schemas, `BaseToolConfig`, `run()` (and optional `run_async()`), env-driven secrets, typed failure outputs. Use when the user asks to "add a tool", "create a tool", "wrap an API as a tool", "build a `BaseTool`", "make a calculator/search/weather tool", or runs `/atomic-agents:create-atomic-tool`.

Skill
BrainBlend-AI

Guidance

99

Control LLM output with regex and grammars, guarantee valid JSON/XML/code generation, enforce structured formats, and build multi-step workflows with Guidance - Microsoft Research's constrained generation framework

Skill
Orchestra-Research

Create Atomic Schema

98

Design and write a `BaseIOSchema` input/output pair for an Atomic Agents agent or tool — docstrings, field descriptions, validators, error variants. Use when the user asks to "create a schema", "design the input/output schema", "define an `IOSchema`", "write a `BaseIOSchema`", "model the agent's output", or runs `/atomic-agents:create-atomic-schema`.

Skill
BrainBlend-AI

© 2025 SkillRepo · Find the right skill, skip the noise.