Chat Format
Format prompts for different LLM providers with chat templates and HNSW-powered context retrieval.
This skill standardizes prompt formatting across different LLM providers and enables efficient context retrieval for RAG applications.
Features
- Format prompts for multiple LLM providers
- Support for Anthropic, OpenAI, Gemini, and Ollama formats
- Integrate HNSW for context retrieval
- Create, add to, and route queries through HNSW indexes
- Check provider status
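The listing does not document the skill's own helper names, so the sketch below uses a hypothetical `format_chat` function to show the kind of provider-specific shaping the features above describe. The field shapes follow the public OpenAI and Anthropic chat APIs (OpenAI embeds the system prompt in the messages list; Anthropic takes it as a top-level field); the function name and signature are assumptions, not the skill's API.

```python
def format_chat(provider, system, turns):
    """Shape a (system prompt, conversation turns) pair into a
    provider-specific payload. `turns` is a list of (role, text)
    pairs with role in {"user", "assistant"}. Hypothetical helper:
    the skill's real entry points are not documented on this page.
    """
    if provider == "openai":
        # OpenAI's chat format carries the system prompt as the
        # first entry in the messages list itself.
        messages = [{"role": "system", "content": system}]
        messages += [{"role": r, "content": t} for r, t in turns]
        return {"messages": messages}
    if provider == "anthropic":
        # Anthropic's Messages API takes the system prompt as a
        # top-level field; messages hold only user/assistant turns.
        return {
            "system": system,
            "messages": [{"role": r, "content": t} for r, t in turns],
        }
    raise ValueError(f"unsupported provider: {provider}")

payload = format_chat("anthropic", "Be terse.", [("user", "Hi")])
```

A formatter like this is what lets the same conversation be replayed against Gemini or Ollama later: only the payload-shaping branch changes, not the stored turns.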
Use Cases
- Preparing prompts for different LLM APIs.
- Building RAG pipelines with vector-based context retrieval.
- Ensuring consistent prompt structure across diverse LLM backends.
Non-Goals
- General-purpose LLM inference outside of prompt formatting.
- Managing LLM provider API keys or authentication.
- Complex RAG pipeline orchestration beyond context retrieval.
Workflow
- Format chat messages for a target LLM provider.
- Create an HNSW index for context retrieval.
- Add documents to the HNSW index.
- Route a query to find relevant context using the HNSW index.
- Check the status of supported LLM providers.
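The create/add/route steps above can be sketched with a toy index. This stand-in does exact cosine search over stored (embedding, document) pairs; a real HNSW index answers the same nearest-neighbor query approximately in sub-linear time. The class and method names here are illustrative, not the skill's actual interface.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

class ContextIndex:
    """Toy stand-in for an HNSW index: brute-force cosine search.
    HNSW replaces the linear scan in route() with a navigable
    small-world graph, trading exactness for speed at scale."""

    def __init__(self):
        self.items = []  # list of (embedding, document_text)

    def add(self, embedding, document):
        # Workflow step: add a document to the index.
        self.items.append((embedding, document))

    def route(self, query_embedding, k=2):
        # Workflow step: route a query to the k most similar documents.
        ranked = sorted(self.items,
                        key=lambda it: cosine(it[0], query_embedding),
                        reverse=True)
        return [doc for _, doc in ranked[:k]]

idx = ContextIndex()
idx.add([1.0, 0.0], "doc about prompts")
idx.add([0.0, 1.0], "doc about retrieval")
idx.add([0.9, 0.1], "doc about chat formats")
top = idx.route([1.0, 0.1], k=2)  # nearest two documents to the query
```

In a RAG pipeline the `top` documents would be prepended to the formatted prompt as retrieved context before the provider call.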
Practices
- Prompt engineering
- LLM integration
- Vector databases
- Information retrieval
Installation
First, add the marketplace:
/plugin marketplace add ruvnet/ruflo
Then install the plugin:
/plugin install ruflo-ruvllm@ruflo
Similar Extensions
Netlify AI Gateway
Reference for Netlify AI Gateway — the managed proxy that routes calls to OpenAI, Anthropic, and Google Gemini SDKs without provider API keys. Use this skill any time the user wants to add AI on a Netlify site (chat, completion, reasoning, image generation, image-to-image edit/stylize), choose or change a model, wire up the OpenAI / Anthropic / @google/genai SDK, decide which provider to use for an image-gen feature (it's Gemini-only on the gateway), or debug "model not found" / "API key missing" against the gateway. Required reading before pinning a model — the gateway exposes a curated subset, not every provider model.
Oh My Claudecode
Process-first advisor routing for Claude, Codex, or Gemini via `omc ask`, with artifact capture and no raw CLI assembly.
Rag Architect
Use when the user asks to design RAG pipelines, optimize retrieval strategies, choose embedding models, implement vector search, or build knowledge retrieval systems.
Product Self Knowledge
Stop and consult this skill whenever your response would include specific facts about Anthropic's products. Covers: Claude Code (how to install, Node.js requirements, platform/OS support, MCP server integration, configuration), Claude API (function calling/tool use, batch processing, SDK usage, rate limits, pricing, models, streaming), and Claude.ai (Pro vs Team vs Enterprise plans, feature limits). Trigger this even for coding tasks that use the Anthropic SDK, content creation mentioning Claude capabilities or pricing, or LLM provider comparisons. Any time you would otherwise rely on memory for Anthropic product details, verify here instead — your training data may be outdated or wrong.
Geo Optimizer
Generative Engine Optimization (GEO) — make content rank in AI search answers from ChatGPT, Claude, Perplexity, Gemini, and Google AI Overviews. Audits existing content, rewrites for AI citation, and produces per-engine strategy. Use when asked to "optimize for AI search", "rank in ChatGPT", "GEO audit", "improve AI citations", "rank in Perplexity", "AI Overview optimization", "AI Overview ranking", "LLM SEO", "answer engine optimization", "AEO", "get cited by AI", "GEO", "generative engine optimization", "show up in ChatGPT", "appear in AI answers", "be cited by Perplexity", "SGE optimization", "Search Generative Experience", or "make my content show up in AI answers". Distinct from regular SEO — this targets generative engines, not traditional Google rankings.
Wrap Up Ritual
End-of-session ritual that audits changes, runs quality checks, captures learnings, and produces a session summary. Use when saying "wrap up", "done for the day", "finish coding", or ending a coding session.