LLM Council
Skill · Verified · Active
Provider-agnostic multi-LLM deliberation. Three phases: independent responses, cross-model anonymized ranking, and chairman synthesis. Provider config comes from env (OPENAI/ANTHROPIC/FIREWORKS/OPENROUTER/custom OpenAI-compatible base URL). Persists the transcript to a wiki page when --wiki <slug> is passed. Use when the user wants multiple AI perspectives, consensus-building, or the "LLM Council" approach for high-stakes reviews, plan critique, or contested learning rules.
Enables users to draw on multiple AI perspectives for high-stakes reviews, consensus-building, or critical decision-making through a structured multi-LLM deliberation process.
Features
- Provider-agnostic multi-LLM deliberation
- Three-phase process (independent, ranking, synthesis)
- Supports OpenAI, Anthropic, OpenRouter, Fireworks, and custom providers via env vars
- Persists transcripts to wiki pages
- Configurable model rosters and chairman selection
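The three-phase flow above can be sketched as a small orchestration loop. This is a minimal illustration, not the plugin's actual API: the `Model` type, `runCouncil`, and the prompt wording are all hypothetical, and real model calls would be asynchronous network requests rather than plain functions.

```typescript
// Hypothetical sketch of the three-phase council flow.
// All names here are illustrative, not the plugin's real interface.
type Model = (prompt: string) => string;

interface CouncilResult {
  responses: Record<string, string>;  // phase 1 output per model
  rankings: Record<string, string[]>; // phase 2 output per model
  synthesis: string;                  // phase 3 chairman output
}

function runCouncil(
  models: Record<string, Model>,
  chairman: Model,
  question: string
): CouncilResult {
  // Phase 1: each council member answers independently.
  const responses: Record<string, string> = {};
  for (const [name, model] of Object.entries(models)) {
    responses[name] = model(question);
  }

  // Phase 2: answers are anonymized ("Answer 1", "Answer 2", ...)
  // so each model ranks content without knowing authorship.
  const anonymized = Object.values(responses)
    .map((text, i) => `Answer ${i + 1}: ${text}`)
    .join("\n");
  const rankings: Record<string, string[]> = {};
  for (const [name, model] of Object.entries(models)) {
    rankings[name] = model(
      `Rank these answers from best to worst:\n${anonymized}`
    ).split("\n");
  }

  // Phase 3: the chairman synthesizes a final answer from the
  // question, all anonymized answers, and the cross-model rankings.
  const synthesis = chairman(
    `Question: ${question}\nAnswers:\n${anonymized}\nRankings:\n${JSON.stringify(rankings)}`
  );

  return { responses, rankings, synthesis };
}
```

The anonymization in phase 2 is the key design point: models rank content, not reputations, which is what makes the cross-model ranking meaningful.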
Use cases
- Gaining multiple AI perspectives for complex problems
- Building consensus on critical decisions or plan critiques
- Utilizing the 'LLM Council' approach for high-stakes reviews
- Creating persistent, searchable wiki pages from AI deliberations
Non-goals
- Acting as a direct replacement for individual LLM API calls without the deliberation framework
- Providing pre-defined AI perspectives without user-defined queries
- Managing LLM provider accounts or billing
Prerequisites
- LLM API keys configured via environment variables (e.g., ANTHROPIC_API_KEY, OPENAI_API_KEY)
- Node.js runtime
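Since provider selection is driven by environment variables, it can be pictured as a simple resolution step. This is a hedged sketch under stated assumptions: the `*_API_KEY` variable names match the prerequisites above, but `LLM_BASE_URL` as the custom-endpoint override, the function name, and the default base URLs are assumptions, not the plugin's documented behavior.

```typescript
// Illustrative env-based provider resolution; not the plugin's actual code.
interface ProviderConfig {
  name: string;
  apiKey: string;
  baseUrl: string;
}

function resolveProvider(
  env: Record<string, string | undefined>
): ProviderConfig | null {
  // Checked in order; the first provider with a key set wins.
  const candidates: Array<[string, string, string]> = [
    ["openai", "OPENAI_API_KEY", "https://api.openai.com/v1"],
    ["anthropic", "ANTHROPIC_API_KEY", "https://api.anthropic.com/v1"],
    ["fireworks", "FIREWORKS_API_KEY", "https://api.fireworks.ai/inference/v1"],
    ["openrouter", "OPENROUTER_API_KEY", "https://openrouter.ai/api/v1"],
  ];
  for (const [name, keyVar, defaultUrl] of candidates) {
    const apiKey = env[keyVar];
    if (apiKey) {
      // A custom OpenAI-compatible endpoint (assumed LLM_BASE_URL here)
      // would override the provider's default base URL.
      return { name, apiKey, baseUrl: env["LLM_BASE_URL"] ?? defaultUrl };
    }
  }
  return null; // no provider configured
}
```

With only ANTHROPIC_API_KEY set, for example, resolution would pick Anthropic with its default base URL; with none set, the skill has no provider to call.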
Installation
Add the marketplace first:
/plugin marketplace add rohitg00/pro-workflow
/plugin install pro-workflow@pro-workflow
Quality score: Verified
Similar extensions
Oh My Claudecode
100 · Process-first advisor routing for Claude, Codex, or Gemini via `omc ask`, with artifact capture and no raw CLI assembly
Lark Wiki CLI
100 · Feishu knowledge base: manage knowledge spaces, space members, and document nodes. Create and query knowledge spaces, view and manage space members, manage node hierarchies, and organize documents and shortcuts within the wiki. Use when the user needs to find or create documents in the wiki, browse knowledge-space structure, view or manage space members, or move or copy nodes.
Rag Architect
100 · Use when the user asks to design RAG pipelines, optimize retrieval strategies, choose embedding models, implement vector search, or build knowledge retrieval systems.
LLM Wiki
100 · Use when building or maintaining a persistent personal knowledge base (second brain) in Obsidian where an LLM incrementally ingests sources, updates entity/concept pages, maintains cross-references, and keeps a synthesis current. Triggers include "second brain", "Obsidian wiki", "personal knowledge management", "ingest this paper/article/book", "build a research wiki", "compound knowledge", "Memex", or whenever the user wants knowledge to accumulate across sessions instead of being re-derived by RAG on every query.
Recursive Research
100 · Deep recursive research with self-regulating loops, up to PhD level. Applicable to any domain (science, technology, business, art, humanities). Uses WDM + Munger inversion for autonomous decision-making, source-reliability grading, and disk checkpointing to overcome context limits.
Understand Knowledge
100 · Analyzes Karpathy-style LLM Wiki knowledge bases and generates an interactive knowledge graph with entity extraction, implicit rules, and topic clustering.