Context Optimization
This skill should be used when the user asks to "optimize context", "reduce token costs", "improve context efficiency", "implement KV-cache optimization", "partition context", or mentions context limits, observation masking, context budgeting, or extending effective context capacity.
Teaches users how to maximize LLM effectiveness and reduce costs by intelligently managing and optimizing the information within the context window.
Features
- Context compaction strategies
- Observation masking for verbose outputs
- KV-cache optimization for prompt stability
- Context partitioning for large tasks
- Token budget management and monitoring
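Two of the techniques listed above, observation masking and token budget management, can be sketched in a few lines. This is an illustrative sketch only: the skill provides guidance rather than an API, and every name, cap, and the 4-characters-per-token heuristic here is an assumption, not part of the skill itself.

```python
MAX_OBSERVATION_CHARS = 2000   # hypothetical cap for a single tool result
TOKEN_BUDGET = 100_000         # hypothetical per-conversation token budget


def mask_observation(text: str, limit: int = MAX_OBSERVATION_CHARS) -> str:
    """Keep the head and tail of a verbose tool output, masking the middle."""
    if len(text) <= limit:
        return text
    half = limit // 2
    omitted = len(text) - limit
    return f"{text[:half]}\n... [{omitted} chars masked] ...\n{text[-half:]}"


def estimate_tokens(text: str) -> int:
    """Rough heuristic: roughly 4 characters per token for English text."""
    return len(text) // 4


def within_budget(messages: list[str], budget: int = TOKEN_BUDGET) -> bool:
    """Check whether the running conversation still fits the budget."""
    return sum(estimate_tokens(m) for m in messages) <= budget


verbose_log = "x" * 10_000
masked = mask_observation(verbose_log)
print(len(masked) < len(verbose_log))  # True: the observation was compacted
```

In practice a real tokenizer would replace the character heuristic, but the shape of the approach, truncate verbose observations and track a running budget, is the same.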
Use Cases
- Reducing token costs in long conversations or with large documents
- Improving agent latency by optimizing context efficiency
- Implementing production-grade agent systems at scale
- Handling complex tasks that push context window limits
Non-Goals
- Implementing specific LLM models or inference engines
- Replacing prompt engineering; rather, it complements it
- Providing a direct API for automated context manipulation (focus is on guidance)
Installation
First, add the marketplace:

/plugin marketplace add muratcankoylan/Agent-Skills-for-Context-Engineering

Then install the plugin:

/plugin install Agent-Skills-for-Context-Engineering@context-engineering-marketplace
Similar Extensions
Latent Briefing (75)
This skill should be used when the user asks to "share memory between agents", "KV cache compaction for multi-agent", "orchestrator worker context", "latent briefing", "reduce worker tokens", "cross-agent memory without summarization", or discusses Attention Matching compaction, recursive language models with workers, or token explosion in hierarchical agents.

Context Compression (100)
This skill should be used when the user asks to "compress context", "summarize conversation history", "implement compaction", "reduce token usage", or mentions context compression, structured summarization, tokens-per-task optimization, or long-running agent sessions exceeding context limits.

Init (100)
Creates, updates, or optimizes an AGENTS.md file for a repository with minimal, high-signal instructions covering non-discoverable coding conventions, tooling quirks, workflow preferences, and project-specific rules that agents cannot infer from reading the codebase. Use when setting up agent instructions or Claude configuration for a new repository, when an existing AGENTS.md is too long, generic, or stale, when agents repeatedly make avoidable mistakes, or when repository workflows have changed and the agent configuration needs pruning. Applies a discoverability filter (omitting anything Claude can learn from README, code, config, or directory structure) and a quality gate to verify each line remains accurate and operationally significant.

Skill Template (99)
Template for creating new Agent Skills for context engineering. Use this template when adding new skills to the collection.

Digital Brain (99)
This skill should be used when the user asks to "write a post", "check my voice", "look up contact", "prepare for meeting", "weekly review", "track goals", or mentions personal brand, content creation, network management, or voice consistency.

Cost Mode (99)
Cost-conscious Claude Code mode. Reduces output tokens 40-70% and overall costs 30-60% by enforcing concise responses, smart model routing, and efficient workflow patterns. Keeps full technical accuracy. Activate with /cost-mode or "enable cost mode". Auto-triggers on mentions of budget, cost, tokens, or spending.