Chat Format
Skill · Verified · Active
Format prompts for different LLM providers with chat templates and HNSW-powered context retrieval.
Purpose: standardize prompt formatting across different LLM providers and enable efficient context retrieval for RAG applications.
Features
- Format prompts for multiple LLM providers
- Support for Anthropic, OpenAI, Gemini, and Ollama formats
- Integrate HNSW for context retrieval
- Create, add to, and route queries through HNSW indexes
- Check provider status
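The skill's own API is not documented on this page, so as a hedged illustration of the multi-provider formatting feature, here is a minimal Python sketch of how the public chat payloads of these providers differ. `format_chat` is a hypothetical helper, not part of ruflo-ruvllm; the field names follow each provider's documented chat API.

```python
def format_chat(system: str, user: str, provider: str) -> dict:
    """Build a provider-shaped chat payload from a system + user prompt.

    Hypothetical helper for illustration; field names follow each
    provider's public chat API, not ruflo-ruvllm's internal format.
    """
    if provider in ("openai", "ollama"):
        # OpenAI-style (also used by Ollama's chat endpoint):
        # the system prompt travels as the first message.
        return {"messages": [
            {"role": "system", "content": system},
            {"role": "user", "content": user},
        ]}
    if provider == "anthropic":
        # Anthropic Messages API: the system prompt is a top-level
        # field, and "messages" holds only user/assistant turns.
        return {"system": system,
                "messages": [{"role": "user", "content": user}]}
    if provider == "gemini":
        # Gemini generateContent: "contents" with text "parts";
        # the system prompt goes in "systemInstruction".
        return {"systemInstruction": {"parts": [{"text": system}]},
                "contents": [{"role": "user",
                              "parts": [{"text": user}]}]}
    raise ValueError(f"unsupported provider: {provider}")


payload = format_chat("Be concise.", "What is HNSW?", "anthropic")
```

The normalization point is the one the feature list makes: the same (system, user) pair lands in structurally different payloads per backend, so callers should go through one formatter rather than hand-building each shape.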
Use Cases
- Preparing prompts for different LLM APIs.
- Building RAG pipelines with vector-based context retrieval.
- Ensuring consistent prompt structure across diverse LLM backends.
Non-Goals
- General-purpose LLM inference outside of prompt formatting.
- Managing LLM provider API keys or authentication.
- Complex RAG pipeline orchestration beyond context retrieval.
Workflow
- Format chat messages for a target LLM provider.
- Create an HNSW index for context retrieval.
- Add documents to the HNSW index.
- Route a query to find relevant context using the HNSW index.
- Check the status of supported LLM providers.
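The create/add/query steps above can be sketched without the skill itself. `ContextIndex` below is a hypothetical, dependency-free stand-in for the HNSW index: it scores every document exactly in O(n) per query, where a real HNSW graph returns approximately the same nearest neighbors in sub-linear time. The two-dimensional embeddings are hand-written toy vectors standing in for model-generated ones.

```python
import math


class ContextIndex:
    """Brute-force stand-in for an HNSW context index (illustration only)."""

    def __init__(self):
        # Each entry pairs an embedding with its source text.
        self.docs: list[tuple[list[float], str]] = []

    def add(self, embedding: list[float], text: str) -> None:
        self.docs.append((embedding, text))

    def query(self, embedding: list[float], k: int = 1) -> list[str]:
        # Rank all documents by cosine similarity to the query vector;
        # HNSW would walk a proximity graph instead of scanning everything.
        def cosine(a, b):
            dot = sum(x * y for x, y in zip(a, b))
            return dot / (math.sqrt(sum(x * x for x in a)) *
                          math.sqrt(sum(y * y for y in b)))
        ranked = sorted(self.docs, key=lambda d: cosine(d[0], embedding),
                        reverse=True)
        return [text for _, text in ranked[:k]]


index = ContextIndex()
index.add([1.0, 0.0], "HNSW builds a layered proximity graph.")
index.add([0.0, 1.0], "Chat templates differ per provider.")
context = index.query([0.9, 0.1], k=1)
# The query vector is closest to the first document, so that text
# is the context routed into the prompt.
```

In the RAG workflow, the strings returned by `query` are what gets spliced into the formatted prompt as retrieved context before the request is sent to a provider.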
Practices
- Prompt engineering
- LLM integration
- Vector databases
- Information retrieval
Installation
First, add the marketplace:
/plugin marketplace add ruvnet/ruflo
Then install the plugin:
/plugin install ruflo-ruvllm@ruflo
Similar Extensions
Netlify AI Gateway
Score: 99. Reference for the Netlify AI Gateway, the managed proxy that routes calls to the OpenAI, Anthropic, and Google Gemini SDKs without provider API keys. Use it whenever the user wants to add AI to a Netlify site: chat, completion, reasoning, image generation, image editing/styling, choosing or changing a model, wiring up the OpenAI / Anthropic / @google/genai SDKs, picking a provider for an image-generation feature (Gemini only in the gateway), or debugging "model not found" / "missing API key" errors in the gateway. Required reading before pinning a model: the gateway exposes a curated subset, not every provider model.
Oh My Claudecode
Score: 100. Process-first advisor routing for Claude, Codex, or Gemini via `omc ask`, with artifact capture and no raw CLI assembly.
Rag Architect
Score: 100. Use when the user asks to design RAG pipelines, optimize retrieval strategies, choose embedding models, implement vector search, or build knowledge retrieval systems.
Product Self Knowledge
Score: 100. Stop and consult this skill whenever your response would include specific facts about Anthropic's products. Covers: Claude Code (how to install, Node.js requirements, platform/OS support, MCP server integration, configuration), Claude API (function calling/tool use, batch processing, SDK usage, rate limits, pricing, models, streaming), and Claude.ai (Pro vs Team vs Enterprise plans, feature limits). Trigger this even for coding tasks that use the Anthropic SDK, content creation mentioning Claude capabilities or pricing, or LLM provider comparisons. Any time you would otherwise rely on memory for Anthropic product details, verify here instead; your training data may be outdated or wrong.
Geo Optimizer
Score: 100. Generative Engine Optimization (GEO): make content rank in AI search answers from ChatGPT, Claude, Perplexity, Gemini, and Google AI Overviews. Audits existing content, rewrites for AI citation, and produces per-engine strategy. Use when asked to "optimize for AI search", "rank in ChatGPT", "GEO audit", "improve AI citations", "rank in Perplexity", "AI Overview optimization", "AI Overview ranking", "LLM SEO", "answer engine optimization", "AEO", "get cited by AI", "GEO", "generative engine optimization", "show up in ChatGPT", "appear in AI answers", "be cited by Perplexity", "SGE optimization", "Search Generative Experience", or "make my content show up in AI answers". Distinct from regular SEO: this targets generative engines, not traditional Google rankings.
Wrap Up Ritual
Score: 100. End-of-session ritual that audits changes, runs quality checks, captures learnings, and produces a session summary. Use when saying "wrap up", "done for the day", "finish coding", or ending a coding session.