
LLM Cost Optimizer

Plugin · Verified · Active

Use when you need to reduce LLM API spend, control token usage, route between models by cost/quality, implement prompt caching, or build cost observability for AI features. Triggers: 'my AI costs are …'

1 Skill · 0 MCPs

Purpose

Reduce LLM API spend and control token usage by intelligently routing requests, implementing prompt caching, and providing cost observability for AI features.
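
As a minimal sketch of the routing idea, the snippet below picks the cheapest model that clears a quality floor. The model names, per-token prices, and the length-based complexity heuristic are illustrative assumptions, not part of the plugin.

# Sketch: route to the cheapest model that clears a quality floor.
# Model names, prices, and the length heuristic are assumptions.
MODELS = [
    # (name, USD per 1M input tokens, relative quality 0-1)
    ("small-model", 0.25, 0.60),
    ("medium-model", 3.00, 0.80),
    ("large-model", 15.00, 0.95),
]

def cheapest_model(min_quality: float) -> str:
    """Pick the lowest-priced model whose quality meets the floor."""
    candidates = [m for m in MODELS if m[2] >= min_quality]
    name, _price, _quality = min(candidates, key=lambda m: m[1])
    return name

def required_quality(prompt: str) -> float:
    # Crude heuristic: treat long prompts as harder tasks.
    return 0.90 if len(prompt) > 2000 else 0.60

prompt = "Summarize this support ticket in one sentence."
print(cheapest_model(required_quality(prompt)))  # -> small-model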

Features

  • Reduce LLM API spend
  • Control token usage
  • Route between models by cost/quality
  • Implement prompt caching
  • Build cost observability for AI features
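
Cost observability, the last item above, can start as simply as recording token counts per request and multiplying by a price table. The sketch below uses hypothetical model names and prices; real prices change, so load them from configuration in practice.

from collections import defaultdict

# Assumed (input, output) prices in USD per 1M tokens.
PRICE_PER_MTOK = {
    "small-model": (0.25, 1.25),
    "large-model": (15.00, 75.00),
}

spend = defaultdict(float)

def record(model: str, input_tokens: int, output_tokens: int) -> None:
    """Accumulate estimated spend per model from token counts."""
    p_in, p_out = PRICE_PER_MTOK[model]
    spend[model] += (input_tokens * p_in + output_tokens * p_out) / 1_000_000

record("small-model", input_tokens=1200, output_tokens=300)
record("large-model", input_tokens=5000, output_tokens=1000)
print(dict(spend))  # per-model spend estimates in USD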

Use cases

  • Use when LLM API costs are a concern
  • Use to optimize token usage for AI features
  • Use to select appropriate models based on cost and quality
  • Use to implement prompt caching for repeated queries
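
For the repeated-queries case in the last item, a minimal exact-match response cache avoids paying for the same completion twice. Here call_model is a hypothetical stand-in for any client call; provider-side prefix caching of long shared prompts is the complementary technique.

import hashlib

_cache: dict[str, str] = {}

def cached_complete(prompt: str, call_model) -> str:
    """Serve repeated identical prompts from a local cache."""
    key = hashlib.sha256(prompt.encode()).hexdigest()
    if key not in _cache:
        _cache[key] = call_model(prompt)  # pay for tokens only once
    return _cache[key]

def fake_model(prompt: str) -> str:
    # Hypothetical stand-in for a real LLM client call.
    return "echo: " + prompt

print(cached_complete("What is our refund policy?", fake_model))
print(cached_complete("What is our refund policy?", fake_model))  # cache hit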

Non-goals

  • Improving prompt quality or effectiveness
  • RAG pipeline design
  • Designing generic AI endpoints without cost considerations

Installation

/plugin install llm-cost-optimizer@alirezarezvani-claude-skills

Quality score

Verified · 99/100
Analyzed 1 day ago

Trust signals

Last commit: 1 day ago
Stars: 14.6k
License: MIT
View source