
Agent Evaluation

Skill · Verified · Active

Evaluate and improve Claude Code commands, skills, and agents. Use when testing prompt effectiveness, validating context engineering choices, or measuring improvement quality.

Purpose

Provides systematic methods and best practices for evaluating and improving the performance, reliability, and quality of AI agents and their components.

Features

  • Structured evaluation methodologies (LLM-as-Judge, Human Eval)
  • Comprehensive rubric design with scoring guidelines
  • Techniques for mitigating LLM evaluation biases
  • Practical prompt patterns and workflow examples
  • Guidance on test case design and iteration
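
The LLM-as-Judge methodology listed above pairs a scoring rubric with an evaluator model. A minimal sketch of that pattern is shown below; the rubric wording, function names, and output format are illustrative assumptions, and the actual judge-model API call is omitted since it depends on the provider.

```python
import re

# Illustrative rubric; real rubrics should give per-score criteria and examples.
RUBRIC = """Score the response on a 1-5 scale:
5: Fully correct and follows all instructions
3: Partially correct or incomplete
1: Incorrect or off-topic
"""


def build_judge_prompt(task: str, response: str) -> str:
    """Assemble an LLM-as-Judge prompt: rubric first, then the artifact to grade."""
    return (
        "You are an impartial evaluator.\n"
        f"{RUBRIC}\n"
        f"Task: {task}\n"
        f"Response to grade: {response}\n"
        "Reply with 'Score: <n>' followed by a one-sentence justification."
    )


def parse_score(judge_output: str):
    """Extract the numeric score; return None if the judge did not comply."""
    m = re.search(r"Score:\s*([1-5])", judge_output)
    return int(m.group(1)) if m else None
```

In practice the prompt from `build_judge_prompt` would be sent to a judge model, and `parse_score` applied to its reply; returning `None` on non-compliant output lets the pipeline retry or flag the case for human review rather than silently miscounting.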

Use Cases

  • Testing prompt effectiveness for AI agents
  • Validating context engineering choices
  • Measuring improvement quality of AI outputs
  • Developing robust evaluation pipelines for AI systems

Non-Goals

  • Developing AI agents themselves
  • Automating all aspects of AI evaluation without human oversight
  • Providing domain-specific evaluation rubrics outside of general AI agent assessment

Practices

  • Evaluation methodology
  • Prompt engineering
  • Test design
  • Bias mitigation

Versioning

  • Release management: the trust signals indicate a recent commit, but no explicit version is declared in the manifest or CHANGELOG, and the installation instructions reference 'main'.

Installation

First, add the marketplace:

/plugin marketplace add NeoLabHQ/context-engineering-kit
/plugin install customaize-agent@context-engineering-kit

Quality Score

Verified
99/100
Analyzed 1 day ago

Trust Signals

Last commit: 9 days ago
Stars: 993
License: GPL-3.0
Status
View source code

Similar Extensions

Create Command

100

Interactive assistant for creating new Claude commands with proper structure, patterns, and MCP tool integration

Skill
NeoLabHQ

Project Development

100

This skill should be used when the user asks to "start an LLM project", "design batch pipeline", "evaluate task-model fit", "structure agent project", or mentions pipeline architecture, agent-assisted development, cost estimation, or choosing between LLM and traditional approaches.

Skill
muratcankoylan

Write A Skill

100

Create new agent skills with proper structure, progressive disclosure, and bundled resources. Use when user wants to create, write, or build a new skill.

Skill
mattpocock

Context Compression

100

This skill should be used when the user asks to "compress context", "summarize conversation history", "implement compaction", "reduce token usage", or mentions context compression, structured summarization, tokens-per-task optimization, or long-running agent sessions exceeding context limits.

Skill
muratcankoylan

Arize Prompt Optimization

100

Optimizes, improves, and debugs LLM prompts using production trace data, evaluations, and annotations. Extracts prompts from spans, gathers performance signal, and runs a data-driven optimization loop using the ax CLI. Use when the user mentions optimize prompt, improve prompt, make AI respond better, improve output quality, prompt engineering, prompt tuning, or system prompt improvement.

Skill
github

Prompt Optimization

100

Applies prompt repetition to improve accuracy for LLMs without reasoning capability

Skill
asklokesh