
Sentence Transformers

Skill · Verified · Active

Framework for state-of-the-art sentence, text, and image embeddings. Provides 5000+ pre-trained models for semantic similarity, clustering, and retrieval. Supports multilingual, domain-specific, and multimodal models. Use for generating embeddings for RAG, semantic search, or similarity tasks. Best for production embedding generation.

Purpose

To provide a comprehensive and performant framework for generating text and image embeddings, enabling advanced AI tasks like semantic search and RAG with a wide selection of pre-trained models.

Features

  • 5000+ pre-trained embedding models
  • Support for multilingual, domain-specific, and multimodal models
  • High-quality embeddings for RAG and semantic search
  • Local, production-ready embedding generation
  • Fast batch encoding and GPU acceleration support

Use Cases

  • Generating embeddings for Retrieval Augmented Generation (RAG) systems
  • Performing semantic similarity searches on text corpora
  • Clustering documents based on semantic meaning
  • Building recommendation systems based on text similarity

Non-Goals

  • Providing an API service for embeddings (focus is local generation)
  • End-to-end RAG pipeline implementation (focus is on the embedding component)
  • Training custom models from scratch (focus is on using pre-trained models)

Practical Utility

  • Edge cases: While the skill provides extensive information on model selection and performance, it does not explicitly detail failure modes or recovery steps for scenarios like rate limiting or network issues.

Documentation

  • Configuration & parameter reference: While the usage examples show parameters for encoding and model loading, a comprehensive reference for all model parameters or configuration options is not explicitly detailed.

Compliance

  • GDPR: The skill processes text provided by the user. While it does not specifically operate on personal data, the output embeddings could indirectly represent it. No explicit sanitization is mentioned, but data is not submitted to any third party.

Errors

  • Actionable error messages: The library's Python exceptions are expected to be informative, but explicit documentation of error codes, causes, and remediation steps for end users is not provided.

Installation

Add the marketplace first:

/plugin marketplace add Orchestra-Research/AI-Research-SKILLs
/plugin install AI-Research-SKILLs@ai-research-skills

Quality Score

Verified
94/100
Analyzed 1 day ago

Trust Signals

Last commit: 17 days ago
Stars: 8.3k
License: MIT

Similar Extensions

Embedding Strategies

100

Select and optimize embedding models for semantic search and RAG applications. Use when choosing embedding models, implementing chunking strategies, or optimizing embedding quality for specific domains.

Skill
wshobson

Segment Anything Model

99

Foundation model for image segmentation with zero-shot transfer. Use when you need to segment any object in images using points, boxes, or masks as prompts, or automatically generate all object masks in an image.

Skill
Orchestra-Research

AgentDB Vector Search

99

Implement semantic vector search with AgentDB for intelligent document retrieval, similarity matching, and context-aware querying. Use when building RAG systems, semantic search engines, or intelligent knowledge bases.

Skill
ruvnet

Azure Ai Contentunderstanding Py

99

Azure AI Content Understanding SDK for Python. Use for multimodal content extraction from documents, images, audio, and video. Triggers: "azure-ai-contentunderstanding", "ContentUnderstandingClient", "multimodal analysis", "document extraction", "video analysis", "audio transcription".

Skill
microsoft

Transformers

98

This skill should be used when working with pre-trained transformer models for natural language processing, computer vision, audio, or multimodal tasks. Use for text generation, classification, question answering, translation, summarization, image classification, object detection, speech recognition, and fine-tuning models on custom datasets.

Skill
K-Dense-AI

Train Sentence Transformers

98

Train or fine-tune sentence-transformers models across `SentenceTransformer` (bi-encoder; dense or static embedding model; for retrieval, similarity, clustering, classification, paraphrase mining, dedup, multimodal), `CrossEncoder` (reranker; pair scoring for two-stage retrieval / pair classification), and `SparseEncoder` (SPLADE, sparse embedding model; for learned-sparse retrieval). Covers loss selection, hard-negative mining, evaluators, distillation, LoRA, Matryoshka, and Hugging Face Hub publishing. Use for any sentence-transformers training task.

Skill
huggingface