Sentence Transformers
Framework for state-of-the-art sentence, text, and image embeddings. Provides 5000+ pre-trained models for semantic similarity, clustering, and retrieval, including multilingual, domain-specific, and multimodal models. Use it to generate embeddings for RAG, semantic search, or similarity tasks; well suited to production embedding generation.
Generate high-quality, production-ready embeddings for semantic similarity, RAG, and search tasks using a wide variety of pre-trained models, offering a cost-effective alternative to API-based solutions.
Features
- State-of-the-art embedding generation
- 5000+ pre-trained models
- Support for multilingual and multimodal models
- Local embedding generation (no API)
- Integration with LangChain and LlamaIndex
Use Cases
- Generating embeddings for Retrieval Augmented Generation (RAG)
- Performing semantic search and similarity tasks
- Clustering and classification of text data
- Building multilingual search applications
Non-Goals
- Providing an API-based embedding service
- Task-specific instruction-based embeddings (like Instructor)
- Managed embedding services (like Cohere Embed)
Trust
- Issues need attention: in the last 90 days, 17 issues were opened and only 4 were closed, indicating slow issue resolution.
Practical Utility
- Edge cases: while happy paths are covered, explicit documentation of failure modes (e.g., model-loading errors, out-of-memory) with recovery steps is minimal.
Installation
npx skills add davila7/claude-code-templates
Runs the Vercel skills CLI (skills.sh) via npx; requires Node.js locally and at least one installed skills-compatible agent (Claude Code, Cursor, Codex, …). Assumes the repo follows the agentskills.io format.
Quality Score: Verified
Similar Extensions
Embedding Strategies (score 100)
Select and optimize embedding models for semantic search and RAG applications. Use when choosing embedding models, implementing chunking strategies, or optimizing embedding quality for specific domains.
TimesFM Forecasting (score 100)
Zero-shot time series forecasting with Google's TimesFM foundation model. Use for any univariate time series (sales, sensors, energy, vitals, weather) without training a custom model. Supports CSV/DataFrame/array inputs with point forecasts and prediction intervals. Includes a preflight system checker script to verify RAM/GPU before first use.
AgentDB Vector Search (score 99)
Implement semantic vector search with AgentDB for intelligent document retrieval, similarity matching, and context-aware querying. Use when building RAG systems, semantic search engines, or intelligent knowledge bases.
Geniml (score 99)
This skill should be used when working with genomic interval data (BED files) for machine learning tasks. Use for training region embeddings (Region2Vec, BEDspace), single-cell ATAC-seq analysis (scEmbed), building consensus peaks (universes), or any ML-based analysis of genomic regions. Applies to BED file collections, scATAC-seq data, chromatin accessibility datasets, and region-based genomic feature learning.
Chroma (score 98)
Open-source embedding database for AI applications. Store embeddings and metadata, perform vector and full-text search, filter by metadata. Simple 4-function API. Scales from notebooks to production clusters. Use for semantic search, RAG applications, or document retrieval. Best for local development and open-source projects.
Train Sentence Transformers (score 98)
Train or fine-tune sentence-transformers models across `SentenceTransformer` (bi-encoder; dense or static embedding model; for retrieval, similarity, clustering, classification, paraphrase mining, dedup, multimodal), `CrossEncoder` (reranker; pair scoring for two-stage retrieval / pair classification), and `SparseEncoder` (SPLADE, sparse embedding model; for learned-sparse retrieval). Covers loss selection, hard-negative mining, evaluators, distillation, LoRA, Matryoshka, and Hugging Face Hub publishing. Use for any sentence-transformers training task.