RAG Implementation
Build Retrieval-Augmented Generation (RAG) systems for LLM applications with vector databases and semantic search. Use when implementing knowledge-grounded AI, building document Q&A systems, or integrating LLMs with external knowledge bases.
Build sophisticated Retrieval-Augmented Generation (RAG) systems for LLM applications, enabling knowledge-grounded AI with vector databases and semantic search.
Features
- Build RAG systems
- Integrate vector databases
- Implement semantic search
- Reduce LLM hallucinations
- Use various embedding models
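The core retrieve-then-generate loop behind these features can be sketched without any external services. In this minimal, dependency-free sketch, a toy bag-of-words counter stands in for a real embedding model (in practice you would call an embedding API or a sentence-transformers model), and the assembled prompt would be sent to an LLM:

```python
from collections import Counter
import math

def embed(text: str) -> Counter:
    # Toy bag-of-words "embedding"; a real system would call an
    # embedding model and get back a dense vector.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    # Rank documents by similarity to the query and keep the top k.
    q = embed(query)
    ranked = sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    # Ground the LLM in retrieved context to reduce hallucinations.
    context = "\n".join(f"- {d}" for d in retrieve(query, docs))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

docs = [
    "The warranty period is 24 months from purchase.",
    "Returns are accepted within 30 days.",
    "Our office is open Monday to Friday.",
]
print(build_prompt("How long is the warranty period?", docs))
```

A production system swaps `embed` for a real embedding model and `retrieve` for a vector-database query, but the data flow (embed, rank by similarity, stuff context into the prompt) is the same.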
Use Cases
- Building Q&A systems over proprietary documents
- Creating chatbots with current, factual information
- Implementing semantic search with natural language queries
- Enabling LLMs to access domain-specific knowledge
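The document Q&A use case above usually starts by splitting source documents into overlapping chunks before embedding them. A minimal character-based sketch (sizes are hypothetical defaults; production systems more often window over tokens and respect sentence boundaries):

```python
def chunk(text: str, size: int = 200, overlap: int = 50) -> list[str]:
    # Fixed-size chunking with overlap, so context that straddles a
    # boundary still appears whole in at least one chunk.
    step = size - overlap
    return [text[i:i + size] for i in range(0, max(len(text) - overlap, 1), step)]

doc = "x" * 500
pieces = chunk(doc)
print(len(pieces))  # → 3 chunks; each shares 50 characters with its neighbor
```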
Non-Goals
- This skill does not provide a ready-to-run RAG system, but rather guidance and implementation patterns.
- It does not entirely abstract away the complexities of vector database management or embedding model selection.
Installation
First, add the marketplace:

/plugin marketplace add wshobson/agents

Then install the plugin:

/plugin install llm-application-dev@claude-code-workflows
Similar Extensions
Embedding Strategies
Select and optimize embedding models for semantic search and RAG applications. Use when choosing embedding models, implementing chunking strategies, or optimizing embedding quality for specific domains.
Chat Format
Format prompts for different LLM providers with chat templates and HNSW-powered context retrieval.
Rag Architect
Use when the user asks to design RAG pipelines, optimize retrieval strategies, choose embedding models, implement vector search, or build knowledge retrieval systems.
LangChain
Framework for building LLM-powered applications with agents, chains, and RAG. Supports multiple providers (OpenAI, Anthropic, Google), 500+ integrations, ReAct agents, tool calling, memory management, and vector store retrieval. Use for building chatbots, question-answering systems, autonomous agents, or RAG applications. Best for rapid prototyping and production deployments.
Langchain Framework
LangChain LLM application framework with chains, agents, RAG, and memory for building AI-powered applications.
Hybrid Search Implementation
Combine vector and keyword search for improved retrieval. Use when implementing RAG systems, building search engines, or when neither approach alone provides sufficient recall.
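One common way to combine vector and keyword results, as in the hybrid search approach described above, is reciprocal rank fusion (RRF), which merges ranked lists without needing to normalize their scores. A sketch with hypothetical document IDs:

```python
def rrf(rankings: list[list[str]], k: int = 60) -> list[str]:
    # Reciprocal Rank Fusion: each list contributes 1/(k + rank + 1)
    # per document; k=60 is the commonly used damping constant.
    scores: dict[str, float] = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank + 1)
    return sorted(scores, key=scores.get, reverse=True)

vector_hits = ["doc3", "doc1", "doc7"]   # from semantic (vector) search
keyword_hits = ["doc1", "doc9", "doc3"]  # from keyword (e.g. BM25) search
print(rrf([vector_hits, keyword_hits]))  # → ['doc1', 'doc3', 'doc9', 'doc7']
```

Documents that appear high in both lists (here `doc1` and `doc3`) rise to the top, which is exactly the behavior you want when neither retriever alone provides sufficient recall.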