Prompt Engineering
Skill · Verified · Active
Use this skill when writing commands, hooks, or skills for Agent, or prompts for sub-agents or any other LLM interaction, including optimizing prompts, improving LLM outputs, or designing production prompt templates.
Enhance LLM performance, reliability, and controllability through advanced prompt engineering techniques and structured communication patterns.
Features
- Few-Shot Learning examples and usage
- Chain-of-Thought prompting for complex reasoning
- Prompt optimization strategies and iteration
- Template systems for reusable prompt structures
- System prompt design for stable instructions
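To make the Few-Shot Learning feature above concrete, here is a minimal sketch of a few-shot prompt builder. The sentiment-classification task, the example pairs, and the `build_few_shot_prompt` function name are all illustrative assumptions, not part of the skill itself:

```python
# Minimal few-shot prompt builder: interleaves demonstration input/output
# pairs before the real query so the model imitates the shown format.
FEW_SHOT_EXAMPLES = [
    ("The food was amazing!", "positive"),
    ("Waited an hour and the order was wrong.", "negative"),
]

def build_few_shot_prompt(query: str) -> str:
    lines = ["Classify the sentiment of each review as positive or negative.", ""]
    for text, label in FEW_SHOT_EXAMPLES:
        lines.append(f"Review: {text}")
        lines.append(f"Sentiment: {label}")
        lines.append("")  # blank line between demonstrations
    # The real query ends with an open "Sentiment:" cue for the model to complete.
    lines.append(f"Review: {query}")
    lines.append("Sentiment:")
    return "\n".join(lines)

prompt = build_few_shot_prompt("Great value for the price.")
print(prompt)
```

Ending the prompt with the open `Sentiment:` cue is the key design choice: it constrains the model to complete the established pattern rather than produce free-form prose.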
Use cases
- Teaching LLMs through examples for consistent formatting or reasoning.
- Improving accuracy on complex analytical tasks by requesting step-by-step reasoning.
- Systematically refining prompts for production use to ensure consistency and cost-efficiency.
- Building reusable prompt structures for multi-turn conversations or role-based interactions.
Non-goals
- Generating code directly.
- Interacting with external services or APIs.
- Managing project dependencies or file structures.
Workflow
- Understand prompt engineering principles.
- Apply techniques like Few-Shot Learning or Chain-of-Thought.
- Optimize prompts through testing and iteration.
- Design reusable prompt templates and system prompts.
- Integrate with LLM workflows for improved results.
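The "design reusable prompt templates and system prompts" step above can be sketched with Python's standard `string.Template`. The reviewer role, the slot names, and the `render_system_prompt` helper are hypothetical, chosen only to show the pattern:

```python
# Reusable system-prompt template: named slots are filled per task, so the
# stable instructions live in one place and only the variables change.
from string import Template

SYSTEM_TEMPLATE = Template(
    "You are a $role. Always respond in $style.\n"
    "Constraints:\n$constraints"
)

def render_system_prompt(role: str, style: str, constraints: list[str]) -> str:
    return SYSTEM_TEMPLATE.substitute(
        role=role,
        style=style,
        constraints="\n".join(f"- {c}" for c in constraints),
    )

print(render_system_prompt(
    "code reviewer",
    "concise bullet points",
    ["cite line numbers", "flag security issues"],
))
```

Keeping the template separate from the fill-in data makes A/B testing and iteration cheap: the template can be versioned and refined without touching the calling code.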
Practices
- Prompt engineering
- LLM interaction design
- Agent communication
Installation
Add the marketplace first:
/plugin marketplace add NeoLabHQ/context-engineering-kit
then install the plugin:
/plugin install customaize-agent@context-engineering-kit
Quality score: Verified
Similar extensions
Arize Prompt Optimization
100
Optimizes, improves, and debugs LLM prompts using production trace data, evaluations, and annotations. Extracts prompts from spans, gathers performance signals, and runs a data-driven optimization loop using the ax CLI. Use when the user mentions optimize prompt, improve prompt, make AI respond better, improve output quality, prompt engineering, prompt tuning, or system prompt improvement.
Prompt Optimization
100
Applies prompt repetition to improve the accuracy of non-reasoning LLMs.
Context Compression
100
This skill should be used when the user asks to "compress context", "summarize conversation history", "implement compaction", "reduce token usage", or mentions context compression, structured summarization, tokens-per-task optimization, or long-running agent sessions exceeding context limits.
Unsloth
100
Expert guidance for fast fine-tuning with Unsloth: 2-5x faster training, 50-80% less memory, LoRA/QLoRA optimization.
CE Optimize
100
Run metric-driven iterative optimization loops: define a measurable goal, run parallel experiments, measure each against hard gates or LLM-as-judge scores, keep improvements, and converge on the best solution. Use when optimizing clustering quality, search relevance, build performance, prompt quality, or any measurable outcome that benefits from systematic experimentation.
Mcp Setup
100
Configure popular MCP servers for enhanced agent capabilities.