LLM Wiki
Use when building or maintaining a persistent personal knowledge base (second brain) in Obsidian where an LLM incrementally ingests sources, updates entity/concept pages, maintains cross-references, and keeps a synthesis current. Triggers include "second brain", "Obsidian wiki", "personal knowledge management", "ingest this paper/article/book", "build a research wiki", "compound knowledge", "Memex", or whenever the user wants knowledge to accumulate across sessions instead of being re-derived by RAG on every query.
To automate the creation and maintenance of a structured, interlinked personal knowledge base (second brain) in Obsidian, allowing knowledge to compound across sessions.
Features
- Incremental knowledge ingestion and compounding
- Automated entity/concept page creation and updates
- Maintenance of cross-references and synthesis
- Structured wiki-based knowledge management
- Workflow for ingestion, querying, and linting
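The cross-reference maintenance and linting above can be sketched as a minimal broken-wikilink scan. This is an illustrative assumption, not the skill's actual logic: the regex and the basename-resolution rule are simplifications of how Obsidian resolves `[[wikilinks]]`.

```python
import re
import tempfile
from pathlib import Path
from typing import List, Tuple

# Capture the link target, stopping before any | alias or # heading anchor
WIKILINK = re.compile(r"\[\[([^\]|#]+)")

def broken_links(vault: Path) -> List[Tuple[str, str]]:
    """Return (page, target) pairs whose [[target]] has no matching note."""
    notes = {p.stem for p in vault.rglob("*.md")}
    missing = []
    for page in vault.rglob("*.md"):
        for target in WIKILINK.findall(page.read_text(encoding="utf-8")):
            # Simplification: resolve links by note basename, ignoring folders
            if Path(target).name not in notes:
                missing.append((page.name, target))
    return missing

vault = Path(tempfile.mkdtemp())
(vault / "Memex.md").write_text("Bush's Memex anticipates [[Hypertext]].", encoding="utf-8")
(vault / "Hypertext.md").write_text("Links back to [[Memex]] and [[Xanadu]].", encoding="utf-8")
print(broken_links(vault))  # [('Hypertext.md', 'Xanadu')]
```

A real lint pass would also check frontmatter schema and orphan pages; this sketch only covers dangling links.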
Use Cases
- Building a persistent personal knowledge base (second brain)
- Maintaining a research wiki for deep dives
- Creating companion wikis for books or articles
- Automating internal knowledge base maintenance for teams
Non-Goals
- One-shot Q&A over fixed documents (use RAG)
- Replacing user curation of new sources
- Operating without an Obsidian vault
- Handling arbitrary file formats without user-LLM interaction
Workflow
- Initialize vault with schema and starter structure
- Add source files to the `raw/` directory
- Ingest source via `/wiki-ingest <path>` command
- Discuss proposed changes and confirm updates with the LLM
- Query the wiki using `/wiki-query <question>`
- Periodically run `/wiki-lint` to check wiki health
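The initialization step above can be sketched as a small script. Only the `raw/` directory is named by the skill itself; the other folder names and the frontmatter field are illustrative assumptions, not the skill's actual schema.

```python
from pathlib import Path

# Assumed starter layout; only raw/ comes from the documented workflow
STARTER_DIRS = ["raw", "entities", "concepts", "synthesis"]

def init_vault(root: str) -> Path:
    """Create a vault skeleton with a starter index note."""
    vault = Path(root)
    for name in STARTER_DIRS:
        (vault / name).mkdir(parents=True, exist_ok=True)
    index = vault / "Index.md"
    if not index.exists():
        # Minimal frontmatter plus a seed wikilink for the synthesis page
        index.write_text(
            "---\ntype: index\n---\n\n# Index\n\n- [[Overview]]\n",
            encoding="utf-8",
        )
    return vault

vault = init_vault("example-vault")
print(sorted(p.name for p in vault.iterdir()))
# ['Index.md', 'concepts', 'entities', 'raw', 'synthesis']
```

Sources dropped into `raw/` are then picked up by `/wiki-ingest <path>` in the next step.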
Practices
- Knowledge Management
- Documentation Best Practices
- Automated Maintenance
- Structured Data Organization
Prerequisites
- Obsidian vault directory
- Access to an LLM CLI (Claude Code, Codex, Cursor, etc.)
- Python 3.7+
Installation
First, add the marketplace, then install the plugin:
`/plugin marketplace add alirezarezvani/claude-skills`
`/plugin install llm-wiki@claude-code-skills`
Similar Extensions
Lark Wiki CLI
Lark (Feishu) knowledge base: manage knowledge spaces, space members, and document nodes. Create and query knowledge spaces, view and manage space members, manage node hierarchies, and organize documents and shortcuts within the knowledge base. Use when the user needs to find or create documents in a knowledge base, browse a knowledge space's structure, view or manage space members, or move or copy nodes.
LLM Council
Provider-agnostic multi-LLM deliberation. Three phases — independent responses, cross-model anonymized ranking, chairman synthesis. Provider config from env (OPENAI/ANTHROPIC/FIREWORKS/OPENROUTER/custom OpenAI-compatible base URL). Persists transcript to a wiki page when --wiki <slug> is passed. Use when the user wants multiple AI perspectives, consensus-building, or the "LLM Council" approach for high-stakes reviews, plan critique, or contested learning rules.
Obsidian Vault Maintainer
Maintain an Obsidian-friendly memory wiki vault with wikilinks, frontmatter, and official Obsidian CLI awareness.
ARA Research Manager
Records research provenance as a post-task epilogue, scanning conversation history at the end of a coding or research session to extract decisions, experiments, dead ends, claims, heuristics, and pivots, and writing them into the ara/ directory with user-vs-AI provenance tags. Use as a session epilogue — never during execution — to maintain a faithful, auditable trace of how a research project actually evolved.
Orchestrate
Wire Commands, Agents, and Skills together for complex features. Use when building features that need research, planning, and implementation phases.
Agent Teams
Coordinate multiple Claude Code sessions as a team — lead + teammates with shared task lists, mailbox messaging, and file-lock claiming. Patterns for team sizing, task decomposition, and when to use teams vs sub-agents vs worktrees.