Hybrid Search Implementation
Skill · Verified · Active
Combine vector and keyword search for improved retrieval. Use when implementing RAG systems, building search engines, or when neither approach alone provides sufficient recall.
To enable developers to build more effective retrieval systems by combining the strengths of semantic vector search with exact keyword matching, improving recall and precision in RAG systems and search engines.
Features
- Hybrid search implementation patterns
- Vector and keyword search fusion (RRF, Linear)
- Cross-encoder reranking for quality improvement
- Code templates for Python, PostgreSQL, and Elasticsearch
- Guidance on best practices for hybrid search tuning
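The RRF fusion listed above can be sketched in a few lines of Python. This is an illustrative sketch, not the skill's shipped template: the function name, the `k=60` default (the conventional RRF constant), and the document IDs are all assumptions for demonstration.

```python
# Reciprocal Rank Fusion (RRF): merge a vector-search ranking and a
# keyword-search ranking into one list using only rank positions,
# so the two systems' incomparable raw scores never need calibration.

def rrf_fuse(rankings, k=60):
    """Combine ranked lists of doc IDs; each doc scores sum(1 / (k + rank))."""
    scores = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    # Highest fused score first
    return sorted(scores, key=scores.get, reverse=True)

vector_hits = ["doc_a", "doc_b", "doc_c"]    # from semantic (vector) search
keyword_hits = ["doc_b", "doc_d", "doc_a"]   # from BM25 / full-text search

fused = rrf_fuse([vector_hits, keyword_hits])
# doc_b ranks first: it places highly in both lists, while doc_a's
# first-place vector hit is offset by its weaker keyword rank.
```

Because RRF depends only on ranks, it needs no score normalization, which is why it is the usual default before reaching for tuned linear weights.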
Use Cases
- Implementing RAG systems with improved recall
- Building domain-specific search engines
- Handling queries requiring exact term matching alongside semantic understanding
- Enhancing search accuracy for technical vocabulary or codes
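For use cases like the above where exact term matches must be weighed against semantic similarity, the linear-fusion alternative to RRF can be sketched as follows. This assumes min-max normalization of each system's raw scores; the names, the `alpha=0.7` weighting, and the sample scores are hypothetical, not values prescribed by this skill.

```python
# Linear score fusion: normalize each retriever's raw scores to [0, 1],
# then blend with a tunable weight alpha (semantic) vs 1 - alpha (keyword).

def minmax(scores):
    """Min-max normalize a {doc_id: raw_score} mapping into [0, 1]."""
    lo, hi = min(scores.values()), max(scores.values())
    span = (hi - lo) or 1.0  # avoid division by zero on uniform scores
    return {doc: (s - lo) / span for doc, s in scores.items()}

def linear_fuse(vec_scores, kw_scores, alpha=0.7):
    """Blend normalized vector and keyword scores; missing docs score 0."""
    vec_n, kw_n = minmax(vec_scores), minmax(kw_scores)
    docs = set(vec_n) | set(kw_n)
    combined = {
        d: alpha * vec_n.get(d, 0.0) + (1 - alpha) * kw_n.get(d, 0.0)
        for d in docs
    }
    return sorted(combined, key=combined.get, reverse=True)

vec = {"doc_a": 0.92, "doc_b": 0.85, "doc_c": 0.40}  # cosine similarities
kw = {"doc_b": 12.1, "doc_c": 9.3, "doc_d": 3.0}     # BM25 scores

ranked = linear_fuse(vec, kw, alpha=0.7)
```

Unlike RRF, linear fusion requires normalization because cosine similarities and BM25 scores live on different scales, but in exchange `alpha` gives a direct tuning knob for how much exact-term matching should count.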
Non-Goals
- Providing a managed search service
- Abstracting away all database-specific syntax
- Implementing full-text search indexing itself (relies on existing DB features)
Installation
Add the marketplace first:
/plugin marketplace add wshobson/agents
/plugin install llm-application-dev@claude-code-workflows
Quality Score
Verified
Similar Extensions
- Embedding Strategies (score: 100): Select and optimize embedding models for semantic search and RAG applications. Use when choosing embedding models, implementing chunking strategies, or optimizing embedding quality for specific domains.
- Rag Architect (score: 100): Use when the user asks to design RAG pipelines, optimize retrieval strategies, choose embedding models, implement vector search, or build knowledge retrieval systems.
- LangChain (score: 99): Framework for building LLM-powered applications with agents, chains, and RAG. Supports multiple providers (OpenAI, Anthropic, Google), 500+ integrations, ReAct agents, tool calling, memory management, and vector store retrieval. Use for building chatbots, question-answering systems, autonomous agents, or RAG applications. Best for rapid prototyping and production deployments.
- Embeddings (score: 99): Vector embeddings with HNSW indexing, sql.js persistence, and hyperbolic support. 75x faster with agentic-flow integration. Use when: semantic search, pattern matching, similarity queries, knowledge retrieval. Skip when: exact text matching, simple lookups, no semantic understanding needed.
- Rag Implementation (score: 98): Build Retrieval-Augmented Generation (RAG) systems for LLM applications with vector databases and semantic search. Use when implementing knowledge-grounded AI, building document Q&A systems, or integrating LLMs with external knowledge bases.
- Rag Engineer (score: 87): Expert in building Retrieval-Augmented Generation systems. Masters embedding models, vector databases, chunking strategies, and retrieval optimization for LLM applications. Use when: building RAG, vector search, embeddings, semantic search, document retrieval.