LLM Council
Skill · Verified · Active
Provider-agnostic multi-LLM deliberation. Three phases: independent responses, cross-model anonymized ranking, and chairman synthesis. Provider configuration comes from environment variables (OPENAI/ANTHROPIC/FIREWORKS/OPENROUTER, or a custom OpenAI-compatible base URL). Persists the transcript to a wiki page when --wiki <slug> is passed. Use when the user wants multiple AI perspectives, consensus-building, or the "LLM Council" approach for high-stakes reviews, plan critique, or contested learning rules.
Enables users to draw on multiple AI perspectives for high-stakes reviews, consensus-building, or critical decision-making through a structured multi-LLM deliberation process.
Features
- Provider-agnostic multi-LLM deliberation
- Three-phase process (independent, ranking, synthesis)
- Supports OpenAI, Anthropic, OpenRouter, Fireworks, and custom providers via env vars
- Persists transcripts to wiki pages
- Configurable model rosters and chairman selection
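The three-phase flow above can be sketched in TypeScript. This is a minimal illustration, not the plugin's actual implementation: `runCouncil`, `AskFn`, and the prompt wording are all hypothetical names invented for this sketch, and the real skill calls provider APIs rather than a passed-in function.

```typescript
// Hypothetical sketch of the three-phase council loop.
// `askModel` stands in for a real provider call.
type AskFn = (model: string, prompt: string) => string;

function runCouncil(
  models: string[],
  chairman: string,
  question: string,
  askModel: AskFn
) {
  // Phase 1: each model answers the question independently.
  const answers = models.map((m) => ({ model: m, answer: askModel(m, question) }));

  // Phase 2: each model ranks the anonymized responses.
  // Responses are labeled by number so rankers cannot see which model wrote what.
  const anonymized = answers
    .map((a, i) => `Response ${i + 1}: ${a.answer}`)
    .join("\n");
  const rankings = models.map((m) =>
    askModel(m, `Rank these responses from best to worst:\n${anonymized}`)
  );

  // Phase 3: the chairman synthesizes answers and rankings into one verdict.
  const synthesis = askModel(
    chairman,
    `Question: ${question}\nResponses:\n${anonymized}\nRankings:\n${rankings.join(
      "\n"
    )}\nSynthesize a final answer.`
  );
  return { answers, rankings, synthesis };
}

// Demo with a stubbed model call.
const result = runCouncil(
  ["model-a", "model-b"],
  "model-a",
  "Is this plan sound?",
  (model, prompt) => `[${model}] ${prompt.slice(0, 20)}...`
);
console.log(result.synthesis.startsWith("[model-a]")); // true
```

The anonymization in phase 2 is the key design point: models rank response labels, not model names, which reduces brand bias in the rankings the chairman later weighs.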
Use cases
- Gaining multiple AI perspectives for complex problems
- Building consensus on critical decisions or plan critiques
- Utilizing the 'LLM Council' approach for high-stakes reviews
- Creating persistent, searchable wiki pages from AI deliberations
Non-goals
- Acting as a direct replacement for individual LLM API calls without the deliberation framework
- Providing pre-defined AI perspectives without user-defined queries
- Managing LLM provider accounts or billing
Prerequisites
- LLM API keys configured via environment variables (e.g., ANTHROPIC_API_KEY, OPENAI_API_KEY)
- Node.js runtime
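Provider resolution from those environment variables might look like the following sketch. The `*_API_KEY` names match the listing; the precedence order, the `LLM_BASE_URL`/`LLM_API_KEY` names for the custom OpenAI-compatible endpoint, and the `resolveProvider` helper are assumptions for illustration, not the skill's documented behavior.

```typescript
// Hypothetical sketch: pick the first configured provider from the environment.
// Precedence order and the custom-endpoint variable names are assumptions.
interface Provider {
  name: string;
  baseUrl: string;
  apiKey: string;
}

function resolveProvider(env: Record<string, string | undefined>): Provider | null {
  if (env.OPENAI_API_KEY)
    return { name: "openai", baseUrl: "https://api.openai.com/v1", apiKey: env.OPENAI_API_KEY };
  if (env.ANTHROPIC_API_KEY)
    return { name: "anthropic", baseUrl: "https://api.anthropic.com", apiKey: env.ANTHROPIC_API_KEY };
  if (env.FIREWORKS_API_KEY)
    return { name: "fireworks", baseUrl: "https://api.fireworks.ai/inference/v1", apiKey: env.FIREWORKS_API_KEY };
  if (env.OPENROUTER_API_KEY)
    return { name: "openrouter", baseUrl: "https://openrouter.ai/api/v1", apiKey: env.OPENROUTER_API_KEY };
  // Custom OpenAI-compatible endpoint: both a base URL and a key are required.
  if (env.LLM_BASE_URL && env.LLM_API_KEY)
    return { name: "custom", baseUrl: env.LLM_BASE_URL, apiKey: env.LLM_API_KEY };
  return null;
}

console.log(resolveProvider({ OPENAI_API_KEY: "sk-test" })?.name); // "openai"
```

In practice you would pass `process.env` to such a helper and fail fast with a clear message when it returns `null`.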
Installation
Add the marketplace first:
/plugin marketplace add rohitg00/pro-workflow
/plugin install pro-workflow@pro-workflow
Similar extensions
Oh My Claudecode
Process-first advisor routing for Claude, Codex, or Gemini via `omc ask`, with artifact capture and no raw CLI assembly.
Lark Wiki CLI
Lark knowledge base: manage knowledge spaces, space members, and document nodes. Create and query knowledge spaces, view and manage space members, manage node hierarchies, and organize documents and shortcuts within a knowledge base. Use when the user needs to find or create documents in a knowledge base, browse the knowledge-space structure, view or manage space members, or move or copy nodes.
Rag Architect
Use when the user asks to design RAG pipelines, optimize retrieval strategies, choose embedding models, implement vector search, or build knowledge retrieval systems.
LLM Wiki
Use when building or maintaining a persistent personal knowledge base (second brain) in Obsidian where an LLM incrementally ingests sources, updates entity/concept pages, maintains cross-references, and keeps a synthesis current. Triggers include "second brain", "Obsidian wiki", "personal knowledge management", "ingest this paper/article/book", "build a research wiki", "compound knowledge", "Memex", or whenever the user wants knowledge to accumulate across sessions instead of being re-derived by RAG on every query.
Recursive Research
Deep recursive research with a self-regulating loop up to PhD level. Applicable to any field (science, technology, business, art, the humanities). Uses WDM + Munger inversion for autonomous decisions, tiering of trusted sources, and on-disk checkpoints to overcome context limits.
Understand Knowledge
Analyze an LLM wiki knowledge base following the Karpathy pattern and generate an interactive knowledge graph with entity extraction, implicit relationships, and topic clusters.