Session Compression
Skill · Active
AI session compression techniques for managing multi-turn conversations efficiently through summarization, embedding-based retrieval, and intelligent context management.
To enable developers to efficiently manage long AI conversations, reduce token costs, and improve response times by implementing advanced session compression strategies.
Features
- Multi-turn conversation compression
- Abstractive and extractive summarization techniques
- Embedding-based retrieval patterns (RAG)
- Progressive compression and hybrid approaches
- LangChain memory integration and Anthropic prompt caching
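The progressive compression listed above can be sketched as follows. This is a minimal illustration under stated assumptions, not the skill's actual implementation: `CompressingHistory` and `join_summarizer` are hypothetical names, and the naive string-joining summarizer stands in for what would be an LLM summarization call in practice.

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class CompressingHistory:
    """Keeps the last `window` messages verbatim; older messages are
    folded into a rolling summary by a pluggable summarizer."""
    summarize: Callable[[str, list[str]], str]
    window: int = 4
    summary: str = ""
    recent: list[str] = field(default_factory=list)

    def add(self, message: str) -> None:
        self.recent.append(message)
        # When the verbatim window overflows, compress the oldest
        # message into the running summary.
        while len(self.recent) > self.window:
            overflow = self.recent.pop(0)
            self.summary = self.summarize(self.summary, [overflow])

    def context(self) -> list[str]:
        # Prompt context: one summary block plus the recent turns.
        head = [f"[summary] {self.summary}"] if self.summary else []
        return head + self.recent

# Trivial stand-in summarizer (an LLM call in a real system):
def join_summarizer(summary: str, msgs: list[str]) -> str:
    parts = ([summary] if summary else []) + msgs
    return " | ".join(parts)

history = CompressingHistory(summarize=join_summarizer, window=2)
for turn in ["hi", "what is RAG?", "define embeddings", "thanks"]:
    history.add(turn)
print(history.context())
# → ['[summary] hi | what is RAG?', 'define embeddings', 'thanks']
```

The design keeps recent turns lossless (they matter most for coherence) while bounding total prompt size, which is the core trade-off every compression strategy in this skill makes.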
Use cases
- Managing customer support chats that exceed context windows
- Building AI tutors that retain student progress across sessions
- Reducing token costs for high-volume LLM applications
- Developing code assistants that track long development sessions
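The embedding-based retrieval (RAG) pattern behind these use cases can be sketched as follows. This toy uses bag-of-words vectors and cosine similarity in place of a real embedding model, and `retrieve` is a hypothetical helper, not part of the skill's API.

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy bag-of-words "embedding"; a real system would call a
    # sentence-embedding model instead.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, turns: list[str], k: int = 2) -> list[str]:
    """Return the k past turns most similar to the query, so only
    relevant history is re-injected into the prompt."""
    q = embed(query)
    ranked = sorted(turns, key=lambda t: cosine(q, embed(t)), reverse=True)
    return ranked[:k]

history = [
    "user asked about refund policy",
    "assistant explained shipping times",
    "user reported a billing error",
]
print(retrieve("billing refund question", history, k=2))
```

Instead of replaying the whole transcript, only the turns most relevant to the current query are reinserted, which is what makes retrieval-based compression scale to very long support or tutoring sessions.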
Non-goals
- Providing a runtime tool that automatically compresses conversations
- Replacing core LLM functionality
- Handling non-textual conversation data
Maintenance
- warning: Dependency Management. Dependencies such as LangChain and OpenAI are used, but no lockfiles or vulnerability-scanning mechanisms are mentioned for them.
Security
- warning: Secret Management. Code examples pass API keys directly or via placeholders, which is not suitable for production and lacks a secure handling mechanism.
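A common mitigation for the secret-management concern noted above is to read keys from environment variables rather than hardcoding them. A minimal sketch (the helper name is illustrative; `ANTHROPIC_API_KEY` is the variable Anthropic's SDK reads by default):

```python
import os

def get_api_key(var: str = "ANTHROPIC_API_KEY") -> str:
    """Load an API key from the environment; fail fast if it is missing."""
    key = os.environ.get(var)
    if not key:
        raise RuntimeError(f"Set {var} before running; never hardcode keys.")
    return key
```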
Trust
- info: Issues Attention. 4 issues were opened and 0 closed in the last 90 days, indicating low recent activity on issue resolution.
Execution
- warning: Pinned Dependencies. Libraries such as LangChain are used, but no pinned dependencies or lockfiles for these third-party packages are shown.
Installation
npx skills add bobmatnyc/claude-mpm-skills
Runs the Vercel skills CLI (skills.sh) via npx. Requires a local Node.js installation and at least one skills-compatible agent (Claude Code, Cursor, Codex, etc.), and assumes the repository follows the agentskills.io format.
Similar extensions
Context Compression
Score: 100. This skill should be used when the user asks to "compress context", "summarize conversation history", "implement compaction", "reduce token usage", or mentions context compression, structured summarization, tokens-per-task optimization, or long-running agent sessions exceeding context limits.
TradeMemory Protocol
Score: 100. Domain knowledge for the Evolution Engine: enables LLMs to autonomously discover strategies from raw OHLCV data. Covers the generate-backtest-select-evolve loop, vectorized backtesting, out-of-sample validation, and policy gradients. Use when discovering trading patterns, running backtests, evolving strategies, or reviewing evolution logs. Triggered by "evolve", "discover patterns", "backtest", "evolution", "strategy generation", "candidate strategy".
Chat Format
Score: 100. Format prompts for different LLM providers with chat templates and HNSW-powered context retrieval.
Rag Architect
Score: 100. Use when the user asks to design RAG pipelines, optimize retrieval strategies, choose embedding models, implement vector search, or build knowledge retrieval systems.
Trader Regime
Score: 100. Detect the current market regime using npx neural-trader: bull/bear/ranging/volatile classification with a recommended strategy.
Trading Memory
Score: 100. Domain knowledge for AI trading memory: Outcome-Weighted Memory (OWM) architecture, 5 memory types, recall scoring, and behavioral analysis. Use to record trades, recall similar contexts, analyze performance, or check for behavioral drift. Triggered by "record trade", "remember trade", "recall", "similar trades", "performance", "behavioral", "disposition", "affective state", "confidence".