LLM Council
Provider-agnostic multi-LLM deliberation in three phases: independent responses, cross-model anonymized ranking, and chairman synthesis. Provider configuration comes from environment variables (OpenAI, Anthropic, Fireworks, OpenRouter, or a custom OpenAI-compatible base URL). Persists the transcript to a wiki page when `--wiki <slug>` is passed. Use when the user wants multiple AI perspectives, consensus-building, or the "LLM Council" approach for high-stakes reviews, plan critiques, or contested learning rules.
Purpose: to give users multiple AI perspectives for high-stakes reviews, consensus-building, and critical decisions through a structured multi-LLM deliberation process.
Features
- Provider-agnostic multi-LLM deliberation
- Three-phase process (independent, ranking, synthesis)
- Supports OpenAI, Anthropic, OpenRouter, Fireworks, and custom providers via env vars
- Persists transcripts to wiki pages
- Configurable model rosters and chairman selection
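The three-phase flow described above can be sketched in a few lines. This is a minimal illustration only: every name in it (`run_council`, `query_model`, the model roster, the prompt wording) is a hypothetical placeholder, not the plugin's actual API.

```python
# Sketch of the three-phase council flow. All identifiers here are
# hypothetical placeholders; the real plugin's API may differ.
from typing import Callable, Dict, List


def run_council(
    question: str,
    models: List[str],
    chairman: str,
    query_model: Callable[[str, str], str],  # (model, prompt) -> reply text
) -> str:
    # Phase 1: each council model answers the question independently.
    answers: Dict[str, str] = {m: query_model(m, question) for m in models}

    # Phase 2: answers are anonymized ("Response 1", "Response 2", ...)
    # so each model ranks them without knowing who wrote what.
    labeled = {f"Response {i + 1}": a for i, a in enumerate(answers.values())}
    ballot = "\n\n".join(f"{label}:\n{text}" for label, text in labeled.items())
    rankings = {
        m: query_model(m, f"Rank these anonymized responses, best to worst:\n{ballot}")
        for m in models
    }

    # Phase 3: the chairman model synthesizes a final answer from the
    # question, the anonymized responses, and the rankings.
    brief = (
        f"Question: {question}\n\nResponses:\n{ballot}\n\n"
        "Rankings:\n" + "\n".join(rankings.values())
    )
    return query_model(chairman, f"Synthesize a final answer:\n{brief}")
```

The chairman can be one of the roster models or a separate, stronger model; keeping the ranking phase anonymized is what prevents models from simply favoring their own answers.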
Use Cases
- Gaining multiple AI perspectives for complex problems
- Building consensus on critical decisions or plan critiques
- Utilizing the 'LLM Council' approach for high-stakes reviews
- Creating persistent, searchable wiki pages from AI deliberations
Non-Goals
- Acting as a direct replacement for individual LLM API calls without the deliberation framework
- Providing pre-defined AI perspectives without user-defined queries
- Managing LLM provider accounts or billing
Prerequisites
- LLM API keys configured via environment variables (e.g., ANTHROPIC_API_KEY, OPENAI_API_KEY)
- Node.js runtime
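Credentials are read from the environment. A minimal shell setup might look like the following; `OPENAI_API_KEY` and `ANTHROPIC_API_KEY` are the standard names, while the OpenRouter, Fireworks, and base-URL variable names below are assumptions based on the provider list above, and all key values are placeholders.

```shell
# Export a key for each provider on the council roster (values are placeholders).
export OPENAI_API_KEY="sk-your-openai-key"
export ANTHROPIC_API_KEY="sk-ant-your-anthropic-key"
export OPENROUTER_API_KEY="your-openrouter-key"   # assumed variable name
export FIREWORKS_API_KEY="your-fireworks-key"     # assumed variable name

# For a custom OpenAI-compatible provider, point at its base URL
# (exact variable name is an assumption):
export OPENAI_BASE_URL="https://my-gateway.example.com/v1"
```

Only the providers actually present in the configured model roster need keys set.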
Installation
First, add the marketplace, then install the plugin:
/plugin marketplace add rohitg00/pro-workflow
/plugin install pro-workflow@pro-workflow
Similar Extensions
Oh My Claudecode
Process-first advisor routing for Claude, Codex, or Gemini via `omc ask`, with artifact capture and no raw CLI assembly.
Lark Wiki CLI
Lark (Feishu) wiki: manage knowledge spaces, space members, and document nodes. Create and query knowledge spaces, view and manage space members, manage node hierarchies, and organize documents and shortcuts within the wiki. Use when the user needs to find or create documents in the wiki, browse the knowledge-space structure, view or manage space members, or move or copy nodes.
Rag Architect
Use when the user asks to design RAG pipelines, optimize retrieval strategies, choose embedding models, implement vector search, or build knowledge retrieval systems.
LLM Wiki
Use when building or maintaining a persistent personal knowledge base (second brain) in Obsidian where an LLM incrementally ingests sources, updates entity/concept pages, maintains cross-references, and keeps a synthesis current. Triggers include "second brain", "Obsidian wiki", "personal knowledge management", "ingest this paper/article/book", "build a research wiki", "compound knowledge", "Memex", or whenever the user wants knowledge to accumulate across sessions instead of being re-derived by RAG on every query.
Recursive Research
Deep recursive research with a self-regulating loop up to PhD level. Applicable to any domain (science, technology, business, art, humanities). Uses WDM + Munger inversion for autonomous decisions, tiering of trusted sources, and checkpointing to disk to survive context limits.
Understand Knowledge
Analyze a Karpathy-pattern LLM wiki knowledge base and generate an interactive knowledge graph with entity extraction, implicit relationships, and topic clustering.