Claude Mem
Search claude-mem's persistent cross-session memory database. Use when the user asks "did we already solve this?" or "how did we do X last time?", or needs work from previous sessions.
Purpose
Allow Claude Code users to easily recall and leverage past work and knowledge across sessions, improving continuity and reducing redundant effort.
Features
- Persistent cross-session memory storage
- Advanced search and filtering capabilities
- Chronological timeline retrieval around observations
- Efficient token usage via progressive disclosure workflow
- Integration with Claude Code and MCP tools
Use Cases
- Recalling solutions to previously encountered problems
- Reviewing past project decisions and discoveries
- Finding specific information from previous work sessions
- Maintaining context across long-term projects
Non-Goals
- Acting as a general-purpose task manager
- Storing real-time conversation context
- Replacing the primary Claude Code conversation history
- Providing project management or bug tracking features
Workflow
- Search the memory index for relevant observations using the `search` tool.
- Filter results by query, type, date, project, and other fields.
- Use the `timeline` tool to get chronological context around interesting results.
- Select relevant observation IDs based on the search and timeline context.
- Fetch full details of the selected observations with the `get_observations` tool.
Practices
- Memory management
- Search indexing
- Context retrieval
Prerequisites
- Node.js 18.0.0 or higher
- Claude Code with plugin support
- Bun (auto-installed if missing)
- uv (auto-installed if missing)
Installation
npx skills add thedotmack/claude-mem
Runs the Vercel skills CLI (skills.sh) via npx; requires Node.js locally and at least one installed skills-compatible agent (Claude Code, Cursor, Codex, …). Assumes the repo follows the agentskills.io format.
Quality Score
Verified
Similar Extensions
Memory Bank (99)
Token-efficient persistent memory system for Claude Code that saves ~67% of tokens on session warm-up (verified with tiktoken). Layered architecture with progressive loading, compact encoding, branch-aware context, smart compression, session diffing, conflict detection, a session continuation protocol, and recovery mode. Activates at session start (if MEMORY.md exists), on "remember this", "pick up where we left off", "what were we doing", "wrap up", "save progress", "don't forget", "switch context", "hand off", "memory health", "save state", "continue where I left off", "context budget", "how much context left", or any session start on a project with existing memory files. This skill solves two problems at once: Claude forgetting everything between sessions, and sessions hitting context limits too fast. It replaces thousands of wasted re-explanation tokens with a compact, structured memory load that gives Claude full project context in under 2,000 tokens.
Validate Plugin (100)
Validate a Claude Code plugin's structure, frontmatter, and MCP tool references.
Running Claude Code via LiteLLM Copilot (100)
Use when routing Claude Code through a local LiteLLM proxy to GitHub Copilot, reducing direct Anthropic spend, configuring ANTHROPIC_BASE_URL or ANTHROPIC_MODEL overrides, or troubleshooting Copilot proxy setup failures such as model-not-found errors, no localhost traffic, or GitHub 401/403 auth errors.
Guard (100)
Protects Claude Code sessions from context overflow by running a background daemon that monitors session size and auto-prunes before compaction hits. Use when the user says "guard", "protect session", "context getting long", "prevent compaction", "session management", or is running agent teams that need continuous context protection.
Create Command (100)
Interactive assistant for creating new Claude commands with proper structure, patterns, and MCP tool integration.
Rule Effectiveness Analysis (100)
Analyze which rules are actively used vs. inert, detect coverage gaps, and recommend pruning to reduce token consumption.