LLM Cost Optimizer
Use when you need to reduce LLM API spend, control token usage, route between models by cost/quality, implement prompt caching, or build cost observability for AI features. Triggers: 'my AI costs are …'
Reduce LLM API spend and control token usage by intelligently routing requests, implementing prompt caching, and providing cost observability for AI features.
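The cost-observability piece can be sketched as a small per-feature spend tracker. This is a minimal illustration, not the plugin's implementation: the model names and per-1K-token prices below are made-up placeholders (real prices vary by provider and change over time).

```python
from collections import defaultdict
from dataclasses import dataclass

# Hypothetical per-1K-token prices in USD; NOT real provider pricing.
PRICES = {
    "small-model": {"input": 0.00015, "output": 0.0006},
    "large-model": {"input": 0.0025, "output": 0.01},
}

@dataclass
class UsageRecord:
    model: str
    input_tokens: int
    output_tokens: int

    @property
    def cost(self) -> float:
        p = PRICES[self.model]
        return (self.input_tokens / 1000) * p["input"] \
             + (self.output_tokens / 1000) * p["output"]

class CostTracker:
    """Aggregates spend per feature so you can see which AI feature drives the bill."""
    def __init__(self) -> None:
        self.by_feature: dict[str, float] = defaultdict(float)

    def record(self, feature: str, rec: UsageRecord) -> float:
        self.by_feature[feature] += rec.cost
        return rec.cost

tracker = CostTracker()
tracker.record("summarize", UsageRecord("large-model", 2000, 500))
tracker.record("autocomplete", UsageRecord("small-model", 300, 50))
print(tracker.by_feature)  # per-feature spend in USD
```

In practice you would feed `input_tokens`/`output_tokens` from the usage metadata your provider returns with each response, and export the aggregates to your metrics system.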
Features
- Reduce LLM API spend
- Control token usage
- Route between models by cost/quality
- Implement prompt caching
- Build cost observability for AI features
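The model-routing feature above can be sketched as "cheapest model that clears a quality bar". The model names, quality scores, and prices here are illustrative assumptions, not data from this plugin or any provider:

```python
def route_model(prompt: str, quality_required: float) -> str:
    """Pick the cheapest model whose quality score meets the requirement.

    All entries below are hypothetical: (name, relative quality 0-1,
    cost per 1K input tokens in USD).
    """
    models = [
        ("small-model", 0.6, 0.00015),
        ("medium-model", 0.8, 0.0006),
        ("large-model", 0.95, 0.0025),
    ]
    eligible = [m for m in models if m[1] >= quality_required]
    if not eligible:
        # Nothing clears the bar: fall back to the highest-quality model.
        return max(models, key=lambda m: m[1])[0]
    # Otherwise take the cheapest model that qualifies.
    return min(eligible, key=lambda m: m[2])[0]

print(route_model("summarize this ticket", 0.7))  # medium-model
```

A real router would also weigh context-window limits and latency, and might classify the prompt (e.g. short autocomplete vs. long reasoning) before choosing the quality bar.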
Use Cases
- Use when LLM API costs are a concern
- Use to optimize token usage for AI features
- Use to select appropriate models based on cost and quality
- Use to implement prompt caching for repeated queries
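The repeated-query caching use case can be sketched as an exact-match cache keyed on a hash of the prompt. This is a simplified standalone sketch (`call_llm` is a stand-in for a real API call); provider-side prompt caching works differently and is configured through the provider's API:

```python
import hashlib

_cache: dict[str, str] = {}

def cached_completion(prompt: str, call_llm) -> str:
    """Return a cached response for an identical prompt, else call the model.

    Exact-match caching only saves money when the same prompt repeats
    verbatim (FAQ answers, canned classifications, etc.).
    """
    key = hashlib.sha256(prompt.encode("utf-8")).hexdigest()
    if key not in _cache:
        _cache[key] = call_llm(prompt)
    return _cache[key]

# Count upstream calls with a fake model to show the cache working.
calls = 0
def fake_llm(prompt: str) -> str:
    global calls
    calls += 1
    return f"answer to: {prompt}"

cached_completion("What is our refund policy?", fake_llm)
cached_completion("What is our refund policy?", fake_llm)  # served from cache
print(calls)  # 1
```

For production use you would bound the cache size, add a TTL so stale answers expire, and key on the full request (model, system prompt, temperature), not just the user prompt.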
Non-Goals
- Improving prompt quality or effectiveness
- RAG pipeline design
- Designing generic AI endpoints without cost considerations
Installation
/plugin install llm-cost-optimizer@alirezarezvani-claude-skills
Similar Extensions
- Claude Cost Optimizer (score 99): Cost-conscious mode for Claude Code. Saves 30-60% on costs through concise responses, model routing, and efficient workflow patterns.
- Aws Cost Ops (score 98): AWS cost optimization, monitoring, and operational excellence with integrated MCP servers for billing, cost analysis, observability, and security assessment.
- Cost Mode (score 98): Cost-conscious mode for Claude Code. Cuts filler, suggests cheaper models, encourages efficient patterns.