LLM Cost Optimizer
Plugin · Verified · Active
Use when you need to reduce LLM API spend, control token usage, route between models by cost/quality, implement prompt caching, or build cost observability for AI features. Triggers: 'my AI costs are
1 Skill · 0 MCPs
Purpose
Reduce LLM API spend and control token usage by intelligently routing requests, implementing prompt caching, and providing cost observability for AI features.
Features
- Reduce LLM API spend
- Control token usage
- Route between models by cost/quality
- Implement prompt caching
- Build cost observability for AI features
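To make the cost/quality routing idea concrete, here is a minimal sketch. The model names, per-token prices, and quality tiers below are placeholders for illustration, not the plugin's actual configuration or API:

```python
# Hypothetical pricing table (USD per 1M tokens); real prices vary by provider.
MODELS = [
    {"name": "small-model", "input_cost": 0.15, "output_cost": 0.60, "quality": 1},
    {"name": "mid-model",   "input_cost": 3.00, "output_cost": 15.00, "quality": 2},
    {"name": "large-model", "input_cost": 15.00, "output_cost": 75.00, "quality": 3},
]

def route(min_quality: int) -> dict:
    """Pick the cheapest model that meets the required quality tier."""
    candidates = [m for m in MODELS if m["quality"] >= min_quality]
    return min(candidates, key=lambda m: m["input_cost"] + m["output_cost"])

def estimate_cost(model: dict, input_tokens: int, output_tokens: int) -> float:
    """Estimate request cost in USD from token counts and per-1M-token prices."""
    return (input_tokens * model["input_cost"]
            + output_tokens * model["output_cost"]) / 1_000_000
```

The key design choice is that simple requests (e.g. classification, extraction) can declare a low minimum quality tier and be served by the cheapest model, while only requests that genuinely need a frontier model pay for one.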
Use Cases
- Use when LLM API costs are a concern
- Use to optimize token usage for AI features
- Use to select appropriate models based on cost and quality
- Use to implement prompt caching for repeated queries
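The prompt-caching use case above can be sketched as a small in-memory cache keyed by a hash of the prompt. This is an illustrative sketch, not the plugin's implementation; a production setup would add TTLs and an external store:

```python
import hashlib

class PromptCache:
    """Caches LLM responses by prompt hash so repeated queries skip the API call."""

    def __init__(self):
        self._store = {}
        self.hits = 0
        self.misses = 0

    def _key(self, prompt: str) -> str:
        # Hash the prompt so arbitrarily long strings make compact cache keys.
        return hashlib.sha256(prompt.encode("utf-8")).hexdigest()

    def get_or_call(self, prompt: str, call_llm):
        key = self._key(prompt)
        if key in self._store:
            self.hits += 1
            return self._store[key]
        self.misses += 1
        response = call_llm(prompt)  # call_llm is any callable hitting the API
        self._store[key] = response
        return response
```

Tracking hits and misses also feeds directly into cost observability: each cache hit is an API call (and its token spend) avoided.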
Non-Goals
- Improving prompt quality or effectiveness
- RAG pipeline design
- Designing generic AI endpoints without cost considerations
Installation
/plugin install llm-cost-optimizer@alirezarezvani-claude-skills
Quality Score
Verified · 99/100
Analyzed 1 day ago