Wiki Query
Query pro-workflow wikis via SQLite FTS5 BM25 retrieval. Returns top-K passages with citations. Use when answering a question that any of the user's wikis already covers, when the user says "what does the wiki say about X", "ask wiki", or "search wikis", or before drafting a new wiki page (to avoid duplication).
The skill enables users to efficiently search and retrieve information from their personal or project wikis, preventing duplicated effort and ensuring accurate citation.
Features
- BM25 retrieval over SQLite FTS5 indexed wikis
- Returns top-K passages with citations
- Search across all wikis or specific ones
- Find related wiki pages based on title/summary
- Display full wiki page content
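The retrieval described above can be sketched as a plain SQLite FTS5 query. A minimal, self-contained sketch follows; the `pages` table, its `title`/`body`/`path` columns, and the sample rows are assumptions for illustration, not pro-workflow's actual schema.

```python
import sqlite3

# Tiny in-memory FTS5 index standing in for a wiki
# (assumed schema; the real pro-workflow index may differ).
db = sqlite3.connect(":memory:")
db.execute("CREATE VIRTUAL TABLE pages USING fts5(title, body, path)")
db.executemany(
    "INSERT INTO pages (title, body, path) VALUES (?, ?, ?)",
    [
        ("FTS5 Notes", "SQLite FTS5 supports BM25 ranking out of the box.",
         "wiki/fts5.md"),
        ("Grocery List", "milk eggs bread", "wiki/groceries.md"),
    ],
)

def top_k(query: str, k: int = 5):
    """Return the k best-matching passages, each with a citation (path).

    FTS5's bm25() returns lower-is-better scores, so ordering
    ascending puts the most relevant rows first.
    """
    return db.execute(
        "SELECT title, snippet(pages, 1, '[', ']', '…', 8) AS passage, "
        "path, bm25(pages) AS score "
        "FROM pages WHERE pages MATCH ? ORDER BY score LIMIT ?",
        (query, k),
    ).fetchall()

for title, passage, path, score in top_k("bm25 ranking"):
    print(f"{title} ({path}): {passage}")
```

Returning `path` alongside each passage is what makes the "top-K passages with citations" contract cheap to satisfy, since every hit already carries its source page.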
Use Cases
- Answering questions that a user's wikis already cover
- Searching wikis when the user says 'what does the wiki say about X'
- Before drafting a new wiki page to avoid duplication
- Verifying citations before quoting a claim
Non-Goals
- Providing semantic vector search (planned for future versions)
- Real-time re-ranking of search results (planned for future versions)
- Operating on external web content or non-local data
Versioning
- Release Management: The README mentions `v3.3` and `v3.3.0`, and the npm badge shows a version. However, there are no explicit versioning fields in the SKILL.md frontmatter or clear changelog entries tied to specific versions.
Execution
- Pinned dependencies: While `npm install` is mentioned, no lockfile or pinned dependency versions are declared in the provided source files.
Installation
First, add the marketplace:
/plugin marketplace add rohitg00/pro-workflow
Then install the plugin:
/plugin install pro-workflow@pro-workflow
Similar Extensions
Lark Wiki CLI
Feishu wiki: manage knowledge spaces, space members, and document nodes. Create and query knowledge spaces, view and manage space members, manage node hierarchies, and organize documents and shortcuts within a knowledge base. Use when the user needs to find or create documents in a knowledge base, browse a knowledge space's structure, view or manage space members, or move or copy nodes.
Rag Architect
Use when the user asks to design RAG pipelines, optimize retrieval strategies, choose embedding models, implement vector search, or build knowledge retrieval systems.
LLM Wiki
Use when building or maintaining a persistent personal knowledge base (second brain) in Obsidian where an LLM incrementally ingests sources, updates entity/concept pages, maintains cross-references, and keeps a synthesis current. Triggers include "second brain", "Obsidian wiki", "personal knowledge management", "ingest this paper/article/book", "build a research wiki", "compound knowledge", "Memex", or whenever the user wants knowledge to accumulate across sessions instead of being re-derived by RAG on every query.
LLM Council
Provider-agnostic multi-LLM deliberation. Three phases: independent responses, cross-model anonymized ranking, chairman synthesis. Provider config from env (OPENAI/ANTHROPIC/FIREWORKS/OPENROUTER/custom OpenAI-compatible base URL). Persists transcript to a wiki page when --wiki <slug> is passed. Use when the user wants multiple AI perspectives, consensus-building, or the "LLM Council" approach for high-stakes reviews, plan critique, or contested learning rules.
Embeddings
Vector embeddings with HNSW indexing, sql.js persistence, and hyperbolic support. 75x faster with agentic-flow integration. Use when: semantic search, pattern matching, similarity queries, knowledge retrieval. Skip when: exact text matching, simple lookups, no semantic understanding needed.
Wiki Builder
Start, structure, and grow a persistent research wiki indexed in pro-workflow's SQLite knowledge base. Each wiki is a folder of markdown pages with provenance, plus a shadow FTS5 index so any session can recall it. Use when the user says "start a wiki", "add to wiki", "compile a page", "wiki on X", or wants a long-lived knowledge base on a topic, paper, product, person, project, or codebase.