
Modal Serverless GPU

Skill · Verified · Active

Serverless GPU cloud platform for running ML workloads. Use when you need on-demand GPU access without infrastructure management, deploying ML models as APIs, or running batch jobs with automatic scaling.

Purpose

Enable users to run ML workloads on demand with GPU access and no infrastructure to manage, using Modal's serverless platform for deployment and batch processing.

Features

  • Serverless GPUs on-demand (T4, A10G, A100, H100, etc.)
  • Python-native infrastructure definition
  • Auto-scaling for ML workloads
  • Deploying ML models as REST APIs
  • Running batch processing jobs with automatic scaling
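The "Python-native infrastructure definition" above can be sketched as follows. This is a minimal, hedged example assuming Modal's documented decorator API; the app name, image contents, model, and GPU choice are illustrative assumptions, not part of this skill.

```python
# Minimal sketch of a Modal app: dependencies and GPU requirements are
# declared in Python rather than in a Dockerfile or YAML manifest.
# All names here (app name, model, GPU type) are illustrative assumptions.
import modal

app = modal.App("example-inference")

# Declare the container image and its Python dependencies inline.
image = modal.Image.debian_slim().pip_install("torch", "transformers")

@app.function(image=image, gpu="A10G")  # request a serverless A10G GPU
def generate(prompt: str) -> str:
    # Imports run inside the remote container, where the deps exist.
    from transformers import pipeline
    pipe = pipeline("text-generation", model="gpt2")
    return pipe(prompt, max_new_tokens=32)[0]["generated_text"]

@app.local_entrypoint()
def main():
    # .remote() executes on a GPU container that scales to zero when idle.
    print(generate.remote("Serverless GPUs let you"))
```

Running `modal run <file>.py` would execute `main` locally and `generate` remotely; this is an infrastructure-definition fragment and requires a Modal account and credentials to actually run.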

Use cases

  • Running GPU-intensive ML workloads without managing infrastructure
  • Deploying ML models as auto-scaling APIs
  • Running batch processing jobs (training, inference, data processing)
  • Prototyping ML applications quickly
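For the batch-processing use case, a hedged sketch of fan-out with Modal's `Function.map` (function name, workload, and GPU type are assumptions for illustration):

```python
# Sketch of auto-scaled batch processing on Modal. The GPU type and the
# per-item work are placeholder assumptions; requires Modal credentials.
import modal

app = modal.App("example-batch")

@app.function(gpu="T4")
def score(item: int) -> int:
    # Placeholder for per-item GPU inference.
    return item * item

@app.local_entrypoint()
def main():
    # .map fans the inputs out across automatically scaled containers
    # and streams results back in order.
    results = list(score.map(range(100)))
    print(sum(results))
```

Modal decides how many containers to spin up based on the input size and configured concurrency, so the same code serves both a 10-item prototype and a million-item job.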

Non-goals

  • Longer-running pods with persistent state (use RunPod instead)
  • Reserved GPU instances (use Lambda Labs instead)
  • Multi-cloud orchestration and cost optimization (use SkyPilot instead)
  • Complex multi-service architectures (use Kubernetes instead)

Installation

First, add the Marketplace:

/plugin marketplace add Orchestra-Research/AI-Research-SKILLs
/plugin install AI-Research-SKILLs@ai-research-skills

Quality score

Verified
95/100
Analyzed 1 day ago

Trust signals

Last commit: 17 days ago
Stars: 8.3k
License: MIT

Similar extensions

Cloudflare Deploy

99

Deploy applications and infrastructure to Cloudflare using Workers, Pages, and related platform services. Use when the user asks to deploy, host, publish, or set up a project on Cloudflare.

Skill
openai

Render Deploy

99

Deploy applications to Render by analyzing codebases, generating render.yaml Blueprints, and providing Dashboard deeplinks. Use when the user wants to deploy, host, publish, or set up their application on Render's cloud platform.

Skill
openai

Cost Optimization

98

Optimize cloud costs across AWS, Azure, GCP, and OCI through resource rightsizing, tagging strategies, reserved instances, and spending analysis. Use when reducing cloud expenses, analyzing infrastructure costs, or implementing cost governance policies.

Skill
wshobson

Skypilot Multi Cloud Orchestration

98

Multi-cloud orchestration for ML workloads with automatic cost optimization. Use when you need to run training or batch jobs across multiple clouds, leverage spot instances with auto-recovery, or optimize GPU costs across providers.

Skill
Orchestra-Research

Modal Serverless GPU

98

Serverless GPU cloud platform for running ML workloads. Use when you need on-demand GPU access without infrastructure management, deploying ML models as APIs, or running batch jobs with automatic scaling.

Skill
davila7

RunPod Cloud GPU

98

Cloud GPU processing via RunPod serverless. Use when setting up RunPod endpoints, deploying Docker images, managing GPU resources, troubleshooting endpoints, or understanding costs. Covers all 5 toolkit images (qwen-edit, realesrgan, propainter, sadtalker, qwen3-tts).

Skill
digitalsamba