Session Compression
AI session compression techniques for managing multi-turn conversations efficiently through summarization, embedding-based retrieval, and intelligent context management.
The goal is to enable developers to manage long AI conversations efficiently, reduce token costs, and improve response times by implementing advanced session compression strategies.
Features
- Multi-turn conversation compression
- Abstractive and extractive summarization techniques
- Embedding-based retrieval patterns (RAG)
- Progressive compression and hybrid approaches
- LangChain memory integration and Anthropic prompt caching
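The progressive-compression approach listed above can be sketched in plain Python: keep the most recent turns verbatim and collapse older turns into a single summary turn. This is an illustrative sketch, not the skill's actual API; the names `compress_history` and `stub_summarize` are hypothetical, and the trivial extractive stub stands in for a real LLM summarization call.

```python
from dataclasses import dataclass

@dataclass
class Turn:
    role: str      # "user", "assistant", or "system"
    content: str

def stub_summarize(turns: list[Turn]) -> str:
    # Stand-in for an LLM call: keep the first sentence of each turn.
    return " ".join(t.content.split(".")[0] + "." for t in turns)

def compress_history(turns: list[Turn], keep_recent: int = 4) -> list[Turn]:
    """Collapse everything except the last `keep_recent` turns into one summary turn."""
    if len(turns) <= keep_recent:
        return turns
    older, recent = turns[:-keep_recent], turns[-keep_recent:]
    summary = Turn("system", "Summary of earlier conversation: " + stub_summarize(older))
    return [summary] + recent
```

Each time the window fills, the already-summarized prefix can be re-summarized together with newly aged-out turns, which is what makes the compression "progressive".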
Use Cases
- Managing customer support chats that exceed context windows
- Building AI tutors that retain student progress across sessions
- Reducing token costs for high-volume LLM applications
- Developing code assistants that track long development sessions
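For use cases like long support chats, the embedding-based retrieval (RAG) pattern selects only the past turns relevant to the current query instead of resending the whole history. A minimal sketch, assuming a toy bag-of-words embedding in place of a real embedding model (the function names here are illustrative):

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy bag-of-words "embedding"; a production system would call a
    # real embedding model instead.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve_relevant(history: list[str], query: str, k: int = 2) -> list[str]:
    """Return the k past turns most similar to the query."""
    q = embed(query)
    return sorted(history, key=lambda t: cosine(embed(t), q), reverse=True)[:k]
```

Only the retrieved turns are appended to the prompt, so token usage grows with relevance rather than with conversation length.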
Non-Goals
- Providing a runtime tool that automatically compresses conversations
- Replacing core LLM functionality
- Handling non-textual conversation data
Maintenance
- warning: Dependency Management: While dependencies such as LangChain and OpenAI are used, no lockfiles or vulnerability-scanning mechanisms are mentioned for them.
Security
- warning: Secret Management: Code examples pass API keys directly or reference them via placeholders, which is not suitable for production and lacks secure handling mechanisms.
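A minimal pattern that avoids hardcoded keys is to read them from the environment. This is a generic sketch, not code from the skill; the helper name `load_api_key` is illustrative.

```python
import os

def load_api_key(env_var: str = "OPENAI_API_KEY") -> str:
    """Read the API key from the environment instead of hardcoding it."""
    key = os.environ.get(env_var)
    if not key:
        raise RuntimeError(f"Set {env_var} before running; never commit keys to source.")
    return key
```

For production, a dedicated secrets manager is preferable, but environment variables are a reasonable baseline for examples.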
Trust
- info: Issues Attention: 4 issues were opened and 0 closed in the last 90 days, indicating low recent activity on issue resolution.
Execution
- warning: Pinned Dependencies: Libraries such as LangChain are used, but no pinned dependencies or lockfiles are demonstrated for these third-party packages.
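Pinning addresses this warning by recording exact versions so installs are reproducible. A hypothetical requirements fragment (the version numbers below are purely illustrative, not the project's actual pins):

```
# requirements.txt -- illustrative pinned versions
langchain==0.2.16
openai==1.40.0
```

Running `pip freeze > requirements.txt` after a working install captures the exact versions in use.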
Installation
npx skills add bobmatnyc/claude-mpm-skills
This runs the Vercel skills CLI (skills.sh) via npx, so it needs Node.js locally and at least one installed skills-compatible agent (Claude Code, Cursor, Codex, ...). It assumes the repo follows the agentskills.io format.
Similar Extensions
- Context Compression (100): This skill should be used when the user asks to "compress context", "summarize conversation history", "implement compaction", "reduce token usage", or mentions context compression, structured summarization, tokens-per-task optimization, or long-running agent sessions exceeding context limits.
- TradeMemory Protocol (100): Domain knowledge for the Evolution Engine, an LLM-powered system for autonomous strategy discovery from raw OHLCV data. Covers the generate-backtest-select-evolve loop, vectorized backtesting, out-of-sample validation, and strategy graduation. Use when discovering trading patterns, running backtests, evolving strategies, or reviewing evolution logs. Triggers on "evolve", "discover patterns", "backtest", "evolution", "strategy generation", "candidate strategy".
- Chat Format (100): Format prompts for different LLM providers with chat templates and HNSW-powered context retrieval.
- Rag Architect (100): Use when the user asks to design RAG pipelines, optimize retrieval strategies, choose embedding models, implement vector search, or build knowledge retrieval systems.
- Trader Regime (100): Detect the current market regime using npx neural-trader: bull/bear/ranging/volatile classification with a recommended strategy.
- Trading Memory (100): Domain knowledge for AI trading memory: Outcome-Weighted Memory (OWM) architecture, 5 memory types, recall scoring, and behavioral analysis. Use when recording trades, recalling similar contexts, analyzing performance, or checking behavioral drift. Triggers on "record trade", "remember trade", "recall", "similar trades", "performance", "behavioral", "disposition", "affective state", "confidence".