Caveman
Ultra-compressed communication mode for Claude Code. Cuts ~75% of tokens while keeping full technical accuracy by speaking like a caveman. The aim is to reduce LLM token consumption and improve response speed through a compressed, caveman-like communication style.
Features
- Ultra-compressed communication mode
- Multiple intensity levels (lite, full, ultra, wenyan)
- Dedicated skills for commit messages, code reviews, and file compression
- Automatic statusline integration and session activation
- Cross-platform installer
Use Cases
- Reducing LLM API costs by minimizing token usage
- Speeding up LLM response times
- Generating concise commit messages and code review comments
- Compressing local memory files to fit more context
- Integrating with various LLM agents and IDEs
Non-Goals
- Reducing the LLM's internal thinking/reasoning tokens
- Altering the LLM's core knowledge or capabilities
- Providing complex code editing or refactoring features beyond simple edits
- Replacing the LLM's default verbose output for sensitive or complex explanations
Practices
- Code compression
- LLM interaction optimization
- Developer productivity
Prerequisites
- Node.js ≥18
- Claude Code or compatible agent
- Optional: `git` for commit/review features
Installation
First, add the marketplace:
`/plugin marketplace add juliusbrussee/caveman`
Then install the plugin:
`/plugin install caveman@caveman`
Skills (6)
/caveman
Ultra-compressed communication mode. Cuts token usage ~75% by speaking like a caveman while keeping full technical accuracy. Supports intensity levels: lite, full (default), ultra, wenyan-lite, wenyan-full, wenyan-ultra. Use when the user says "caveman mode", "talk like caveman", "use caveman", "less tokens", "be brief", or invokes /caveman. Also auto-triggers when token efficiency is requested.
/caveman-compress
Compresses natural-language memory files (CLAUDE.md, todos, preferences) into caveman format to save input tokens. Preserves all technical substance, code, URLs, and structure. The compressed version overwrites the original file; a human-readable backup is saved as FILE.original.md. Trigger: /caveman-compress FILEPATH or "compress memory file".
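As an illustration of the kind of rewrite the compress skill performs (the before/after text here is invented, not taken from the plugin):

```
Before: Always run the full test suite before committing, because
        partial runs have missed regressions in the past.
After:  Run full tests before commit. Partial runs missed bugs.
```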
/caveman-stats
Shows real token usage and estimated savings for the current session. Reads directly from the Claude Code session log; no AI estimation. Triggers on /caveman-stats. Output is injected by the mode-tracker hook; the model itself does not compute the numbers.
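The savings figure reported by the stats skill boils down to simple arithmetic over token counts. A minimal sketch (the function name is illustrative; the real skill reads its counts from the session log rather than taking them as arguments):

```python
def token_savings(verbose_tokens: int, caveman_tokens: int) -> float:
    """Fraction of tokens saved, e.g. 0.75 for the advertised ~75%."""
    if verbose_tokens <= 0:
        raise ValueError("verbose_tokens must be positive")
    return 1.0 - caveman_tokens / verbose_tokens

# 400 tokens of verbose output compressed to 100 caveman tokens:
print(f"{token_savings(400, 100):.0%}")  # prints 75%
```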
/caveman-commit
Ultra-compressed commit message generator. Cuts noise from commit messages while preserving intent and reasoning. Uses Conventional Commits format: subject ≤50 chars, body only when the "why" isn't obvious. Use when the user says "write a commit", "commit message", "generate commit", "/commit", or invokes /caveman-commit. Auto-triggers when staging changes.
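A hypothetical commit message in this style (the change described is invented for illustration; note the ≤50-char subject and a body that only carries the "why"):

```
feat(cache): add TTL to session cache

Stale entries served old prices. TTL evicts after 5 min.
```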
/caveman-help
Quick-reference card for all caveman modes, skills, and commands. One-shot display, not a persistent mode. Trigger: /caveman-help, "caveman help", "what caveman commands", "how do I use caveman".
/caveman-review
Ultra-compressed code review comments. Cuts noise from PR feedback while preserving the actionable signal. Each comment is one line: location, problem, fix. Use when the user says "review this PR", "code review", "review the diff", "/review", or invokes /caveman-review. Auto-triggers when reviewing pull requests.
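A hypothetical one-line comment in the location, problem, fix format (the file, line, and issue are invented for illustration):

```
src/auth.ts:42: token never expires. Add TTL check.
```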
Similar Extensions
- Context7 Plugin (score 100): Upstash Context7 MCP server for up-to-date documentation lookup. Pull version-specific documentation and code examples directly from source repositories into your LLM context.
- Kanban (score 100): Markdown-based Kanban board managed by Claude Code. Cards live as .md files; no database, no server.
- Obey (score 100): Makes Claude actually follow your rules. Save rules with natural language, enforce them with hooks, remember them across sessions.
- Unslop (score 100): Make assistant output sound human. Strip AI-isms (sycophancy, stock vocab, hedging stacks, em-dash pileups), engineer burstiness, restore voice. Preserves code, URLs, and technical accuracy.
- Ai Skills (score 99): Claude Code expertise: skills, commands, hooks, MCP, settings (7 skills).
- Toon Formatter (score 97): TOON format for 30-60% token savings on tabular data.