Sentence Transformers
Framework for state-of-the-art sentence, text, and image embeddings. Provides 5000+ pre-trained models for semantic similarity, clustering, and retrieval. Supports multilingual, domain-specific, and multimodal models. Use for generating embeddings for RAG, semantic search, or similarity tasks. Best for production embedding generation.
To provide a comprehensive and performant framework for generating text and image embeddings, enabling advanced AI tasks like semantic search and RAG with a wide selection of pre-trained models.
Features
- 5000+ pre-trained embedding models
- Support for multilingual, domain-specific, and multimodal models
- High-quality embeddings for RAG and semantic search
- Local, production-ready embedding generation
- Fast batch encoding and GPU acceleration support
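In practice, the features above map to a small API surface: `SentenceTransformer.encode` turns a list of strings into a dense vector matrix, batched and GPU-accelerated when available, and retrieval is then a cosine-similarity lookup over those vectors. A minimal sketch of that retrieval step, using random unit vectors as stand-ins for real model output (the model name and `encode` call in the comments are one common usage pattern, shown for illustration):

```python
import numpy as np

# In practice the embeddings come from the library, e.g.:
#   from sentence_transformers import SentenceTransformer
#   model = SentenceTransformer("all-MiniLM-L6-v2")  # one commonly used model
#   doc_vecs = model.encode(docs, batch_size=32, normalize_embeddings=True)
# Random unit vectors stand in here so the sketch is self-contained.

rng = np.random.default_rng(0)

def normalize(x: np.ndarray) -> np.ndarray:
    """L2-normalize rows so a dot product equals cosine similarity."""
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

doc_vecs = normalize(rng.standard_normal((100, 384)))   # 384 dims mimics MiniLM
query_vec = normalize(rng.standard_normal((1, 384)))

# On normalized vectors, cosine similarity reduces to a matrix product.
scores = (query_vec @ doc_vecs.T).ravel()
top_k = np.argsort(-scores)[:5]   # indices of the 5 most similar documents
```

Normalizing at encode time is a common choice because it lets the search index use plain dot products instead of recomputing norms per query.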
Use cases
- Generating embeddings for Retrieval Augmented Generation (RAG) systems
- Performing semantic similarity searches on text corpora
- Clustering documents based on semantic meaning
- Building recommendation systems based on text similarity
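The clustering and similarity use cases above reduce to grouping vectors whose pairwise cosine similarity exceeds a threshold. A hedged sketch of greedy threshold clustering over precomputed embeddings (the `greedy_cluster` helper and the threshold value are illustrative, not part of the library, which ships its own utilities for this):

```python
import numpy as np

def greedy_cluster(embs: np.ndarray, threshold: float = 0.8) -> list[list[int]]:
    """Assign each row to the first cluster whose centroid is similar enough,
    otherwise start a new cluster. O(n * clusters); fine for small corpora."""
    embs = embs / np.linalg.norm(embs, axis=1, keepdims=True)
    clusters: list[list[int]] = []
    for i, v in enumerate(embs):
        for members in clusters:
            centroid = embs[members].mean(axis=0)
            centroid /= np.linalg.norm(centroid)
            if float(v @ centroid) >= threshold:
                members.append(i)
                break
        else:
            clusters.append([i])
    return clusters

# Two tight groups of near-duplicate vectors plus one outlier.
base_a = np.array([1.0, 0.0, 0.0])
base_b = np.array([0.0, 1.0, 0.0])
data = np.stack([base_a, base_a + 0.01, base_b, base_b + 0.01,
                 np.array([0.0, 0.0, 1.0])])
clusters = greedy_cluster(data, threshold=0.9)   # three groups: [0,1], [2,3], [4]
```

For real document sets the rows of `data` would be the matrix returned by `encode`, and the threshold would be tuned per model and domain.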
Non-goals
- Providing an API service for embeddings (focus is local generation)
- End-to-end RAG pipeline implementation (focus is on the embedding component)
- Training custom models from scratch (focus is on using pre-trained models)
Practical Utility
- info: Edge cases. While the skill provides extensive information on model selection and performance, it does not explicitly detail failure modes or recovery steps for scenarios such as rate limiting or network issues.
Documentation
- info: Configuration and parameter reference. While the usage examples show parameters for encoding and model loading, a comprehensive reference for all model parameters or configuration options is not explicitly provided.
Compliance
- info: GDPR. The skill processes user-provided text. It does not specifically operate on personal data, but the output embeddings could indirectly encode it. No explicit sanitization is mentioned; however, no data is submitted to a third party.
Errors
- info: Actionable error messages. The library's Python exceptions are expected to be informative, but explicit documentation of error codes, causes, and remediation steps for end users is not provided.
Installation
Add the marketplace first
/plugin marketplace add Orchestra-Research/AI-Research-SKILLs
/plugin install AI-Research-SKILLs@ai-research-skills
Similar extensions
Embedding Strategies
100: Select and optimize embedding models for semantic search and RAG applications. Use when choosing embedding models, implementing chunking strategies, or optimizing embedding quality for specific domains.
Segment Anything Model
99: Foundation model for image segmentation with zero-shot transfer. Use when you need to segment any object in images using points, boxes, or masks as prompts, or automatically generate all object masks in an image.
AgentDB Vector Search
99: Implement semantic vector search with AgentDB for intelligent document retrieval, similarity matching, and context-aware querying. Use when building RAG systems, semantic search engines, or intelligent knowledge bases.
Azure Ai Contentunderstanding Py
99: Azure AI Content Understanding SDK for Python. Use for multimodal content extraction from documents, images, audio, and video. Triggers: "azure-ai-contentunderstanding", "ContentUnderstandingClient", "multimodal analysis", "document extraction", "video analysis", "audio transcription".
Transformers
98: This skill should be used when working with pre-trained transformer models for natural language processing, computer vision, audio, or multimodal tasks. Use for text generation, classification, question answering, translation, summarization, image classification, object detection, speech recognition, and fine-tuning models on custom datasets.
Train Sentence Transformers
98: Train or fine-tune sentence-transformers models across `SentenceTransformer` (bi-encoder; dense or static embedding model; for retrieval, similarity, clustering, classification, paraphrase mining, dedup, multimodal), `CrossEncoder` (reranker; pair scoring for two-stage retrieval / pair classification), and `SparseEncoder` (SPLADE, sparse embedding model; for learned-sparse retrieval). Covers loss selection, hard-negative mining, evaluators, distillation, LoRA, Matryoshka, and Hugging Face Hub publishing. Use for any sentence-transformers training task.