
Reflexion


A collection of commands that force the LLM to reflect on its previous response and output, based on papers such as Self-Refine and Reflexion. These techniques improve the output of large language models by introducing feedback and refinement loops.

3 Skills · 0 MCPs
Purpose

To significantly enhance the quality, predictability, and accuracy of LLM outputs by integrating advanced reflection, critique, and memory update mechanisms.

Features

  • Automatic reflection triggered by keywords
  • Commands for self-reflection, critique, and memory curation
  • Integration with Agentic Context Engineering (ACE)
  • Multi-agent debate and LLM-as-a-Judge for critique
  • Session data persistence for hook analysis
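The reflection-and-critique cycle these commands implement can be sketched as a generate → critique → refine loop. The sketch below is a minimal illustration, not the plugin's actual implementation; `call_model` is a hypothetical stand-in for any LLM call, stubbed here so the example runs standalone.

```python
# Minimal Self-Refine-style loop: generate a draft, critique it, and
# refine until the critic approves or the retry budget runs out.

def call_model(prompt: str) -> str:
    """Hypothetical LLM call, stubbed for illustration only."""
    if prompt.startswith("CRITIQUE:"):
        # Toy critic: approve once the draft has been refined.
        return "OK" if "refined" in prompt else "Too vague; add detail."
    # Toy generator/refiner: prepend a marker to the request.
    return "refined " + prompt.removeprefix("REFINE:").strip()

def self_refine(task: str, max_rounds: int = 3) -> str:
    draft = call_model(task)
    for _ in range(max_rounds):
        critique = call_model(f"CRITIQUE: {draft}")
        if critique == "OK":  # critic found no remaining issues
            break
        draft = call_model(f"REFINE: {draft}\nFeedback: {critique}")
    return draft
```

In practice the critique step would use a separate prompt (or a second agent, as in multi-agent debate or LLM-as-a-Judge), and the loop would stop on a real quality check rather than a string match.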

Use Cases

  • Improving code generation quality through iterative refinement
  • Ensuring LLM outputs strictly adhere to requirements
  • Creating evolving knowledge bases for AI agents
  • Automating quality assurance for LLM-generated content

Non-Goals

  • Automatically fixing identified issues without user/LLM approval
  • Providing generic LLM features not related to output improvement
  • Replacing the core LLM's reasoning capabilities

License

  • Critical (license usability): The plugin is licensed under GPL-3.0, a strong copyleft license that may restrict its use in commercial or proprietary software without careful consideration and compliance.

Installation

First, add the marketplace, then install the plugin:

/plugin marketplace add NeoLabHQ/context-engineering-kit
/plugin install reflexion@context-engineering-kit

Quality Score

75 / 100 (Warning)

Trust Signals

  • Last commit: 8 days ago
  • Stars: 993
  • License: GPL-3.0

© 2025 SkillRepo · Find the right skill, skip the noise.