
Segment Anything Model

Skill · Verified · Active

Foundation model for image segmentation with zero-shot transfer. Use when you need to segment any object in images using points, boxes, or masks as prompts, or automatically generate all object masks in an image.

Purpose

To enable users to perform zero-shot image segmentation on any object in images using flexible prompts, or to automatically generate all object masks, without requiring task-specific training.

Features

  • Zero-shot image segmentation
  • Flexible prompting (points, boxes, masks)
  • Automatic mask generation
  • Support for multiple model variants (ViT-B/L/H)
  • Clear installation and usage instructions
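The prompt format above (points, boxes) can be sketched as follows. This is a minimal illustration, not a runnable inference example: the coordinates and scores are made-up stand-ins, and the actual model calls (per the facebookresearch/segment-anything README) are shown only in comments, since they require a downloaded checkpoint.

```python
import numpy as np

# Hypothetical prompts in the format SamPredictor.predict expects
# (names per the facebookresearch/segment-anything README; the
# coordinates here are invented for illustration).
point_coords = np.array([[320, 240], [100, 80]], dtype=np.float32)  # (x, y) pixels
point_labels = np.array([1, 0], dtype=np.int64)  # 1 = foreground, 0 = background
box = np.array([60, 40, 580, 440], dtype=np.float32)  # x0, y0, x1, y1

# With a loaded model the calls would look like:
#   sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b.pth")
#   predictor = SamPredictor(sam)
#   predictor.set_image(image)                       # image: HxWx3 uint8 RGB
#   masks, scores, _ = predictor.predict(
#       point_coords=point_coords, point_labels=point_labels,
#       box=box, multimask_output=True)              # masks: 3xHxW bool
# SAM returns up to three candidate masks; keep the best-scoring one:
scores = np.array([0.87, 0.93, 0.71])        # stand-in for predictor output
masks = np.zeros((3, 480, 640), dtype=bool)  # stand-in masks
best = masks[int(np.argmax(scores))]
print(best.shape)  # (480, 640)
```

`multimask_output=True` is useful for ambiguous single-point prompts (part vs. whole object); with a box or several points, a single mask is often sufficient.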

Use Cases

  • Segmenting any object in images without fine-tuning
  • Building interactive annotation tools
  • Generating training data for computer vision models
  • Processing specialized image domains (medical, satellite)
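For the annotation and training-data use cases above, automatic mask generation is the usual entry point. A minimal post-processing sketch is shown below; the dict keys match the `SamAutomaticMaskGenerator` output documented in the upstream README, but the mask records here are stand-ins rather than real generator output.

```python
import numpy as np

# SamAutomaticMaskGenerator(sam).generate(image) returns a list of dicts;
# the keys below ('segmentation', 'area', 'predicted_iou') follow the
# upstream README. These records are fabricated for illustration.
masks = [
    {"segmentation": np.ones((480, 640), bool), "area": 5200, "predicted_iou": 0.95},
    {"segmentation": np.zeros((480, 640), bool), "area": 40, "predicted_iou": 0.52},
]

# Typical cleanup before annotation export: drop tiny or low-confidence
# masks, then sort largest-first.
kept = [m for m in masks if m["area"] >= 100 and m["predicted_iou"] >= 0.8]
kept.sort(key=lambda m: m["area"], reverse=True)
print(len(kept))  # 1
```

Thresholds like the area and IoU cutoffs are domain-dependent; medical or satellite imagery may need very different values.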

Non-Goals

  • Real-time object detection with predefined classes (use YOLO/Detectron2)
  • Semantic/panoptic segmentation with categories (use Mask2Former)
  • Text-prompted segmentation (use GroundingDINO + SAM)
  • Video segmentation tasks (use SAM 2)

Installation

npx skills add davila7/claude-code-templates

Runs the Vercel skills CLI (skills.sh) via npx. Requires Node.js locally and at least one installed skills-compatible agent (Claude Code, Cursor, Codex, …), and assumes the repository follows the agentskills.io format.

Quality Score

Verified
95/100
Analyzed about 18 hours ago

Trust Signals

Last commit: about 20 hours ago
Stars: 27.2k
License: MIT
Status
View source code

Similar Extensions

Segment Anything Model

99

Foundation model for image segmentation with zero-shot transfer. Use when you need to segment any object in images using points, boxes, or masks as prompts, or automatically generate all object masks in an image.

Skill
Orchestra-Research

Transformers

98

This skill should be used when working with pre-trained transformer models for natural language processing, computer vision, audio, or multimodal tasks. Use for text generation, classification, question answering, translation, summarization, image classification, object detection, speech recognition, and fine-tuning models on custom datasets.

Skill
K-Dense-AI

Senior Computer Vision

98

Computer vision engineering skill for object detection, image segmentation, and visual AI systems. Covers CNN and Vision Transformer architectures, YOLO/Faster R-CNN/DETR detection, Mask R-CNN/SAM segmentation, and production deployment with ONNX/TensorRT. Includes PyTorch, torchvision, Ultralytics, Detectron2, and MMDetection frameworks. Use when building detection pipelines, training custom models, optimizing inference, or deploying vision systems.

Skill
alirezarezvani

Clip

98

OpenAI's model connecting vision and language. Enables zero-shot image classification, image-text matching, and cross-modal retrieval. Trained on 400M image-text pairs. Use for image search, content moderation, or vision-language tasks without fine-tuning. Best for general-purpose image understanding.

Skill
Orchestra-Research

Blip 2 Vision Language

98

Vision-language pre-training framework bridging frozen image encoders and LLMs. Use when you need image captioning, visual question answering, image-text retrieval, or multimodal chat with state-of-the-art zero-shot performance.

Skill
Orchestra-Research

Stable Diffusion Image Generation

95

State-of-the-art text-to-image generation with Stable Diffusion models via HuggingFace Diffusers. Use when generating images from text prompts, performing image-to-image translation, inpainting, or building custom diffusion pipelines.

Skill
davila7