
LLM Cost Optimizer

Plugin · Verified · Active

Use when you need to reduce LLM API spend, control token usage, route between models by cost/quality, implement prompt caching, or build cost observability for AI features. Triggers: 'my AI costs are

1 Skill 0 MCPs
Purpose

Reduce LLM API spend and control token usage by intelligently routing requests, implementing prompt caching, and providing cost observability for AI features.
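Cost observability of the kind described here can be sketched as a per-feature spend tally. This is a minimal illustrative sketch, not the plugin's implementation; the `record` function and the per-1k-token prices are assumptions for the example.

```python
from collections import defaultdict

# Illustrative per-1k-token prices; real rates vary by provider and model.
PRICE_PER_1K = {"input": 0.001, "output": 0.002}

spend = defaultdict(float)  # cumulative USD spend, keyed by feature name

def record(feature: str, input_tokens: int, output_tokens: int) -> None:
    """Attribute the cost of one LLM call to the feature that made it."""
    spend[feature] += (input_tokens / 1000) * PRICE_PER_1K["input"]
    spend[feature] += (output_tokens / 1000) * PRICE_PER_1K["output"]

record("summarizer", 1200, 300)
record("chatbot", 500, 500)
print(dict(spend))
```

Tagging every call with the feature that triggered it is what turns a single opaque API bill into a breakdown you can act on, e.g. by routing the most expensive feature to a cheaper model.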

Features

  • Reduce LLM API spend
  • Control token usage
  • Route between models by cost/quality
  • Implement prompt caching
  • Build cost observability for AI features
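Routing between models by cost and quality, as listed above, can be sketched as picking the cheapest model that meets a required quality tier. The model names, prices, and quality tiers below are illustrative assumptions, not the plugin's actual model table.

```python
# Hypothetical model table: name, price, and a coarse quality tier.
MODELS = [
    {"name": "small-model", "cost_per_1k_tokens": 0.0002, "quality": 1},
    {"name": "mid-model",   "cost_per_1k_tokens": 0.003,  "quality": 2},
    {"name": "large-model", "cost_per_1k_tokens": 0.03,   "quality": 3},
]

def route(prompt: str, min_quality: int = 1) -> str:
    """Return the cheapest model whose quality tier is sufficient."""
    candidates = [m for m in MODELS if m["quality"] >= min_quality]
    return min(candidates, key=lambda m: m["cost_per_1k_tokens"])["name"]

print(route("summarize this email"))       # cheap tier is enough
print(route("complex legal analysis", 3))  # task demands the top tier
```

In practice the `min_quality` requirement would come from a task classifier or a per-endpoint policy rather than being passed by hand.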

Use cases

  • Use when LLM API costs are a concern
  • Use to optimize token usage for AI features
  • Use to select appropriate models based on cost and quality
  • Use to implement prompt caching for repeated queries

Non-goals

  • Improving prompt quality or effectiveness
  • RAG pipeline design
  • Designing generic AI endpoints without cost considerations

Installation

/plugin install llm-cost-optimizer@alirezarezvani-claude-skills

Quality score

Verified
99/100
Analyzed about 22 hours ago

Trust signals

Last commit: 1 day ago
Stars: 14.6k
License: MIT
Status
View source code