Lambda Labs GPU Cloud
Skill (active): Reserved and on-demand GPU cloud instances for ML training and inference. Use when you need dedicated GPU instances with simple SSH access, persistent filesystems, or high-performance multi-node clusters for large-scale training.
This skill enables users to provision and manage dedicated GPU instances on Lambda Labs for ML training and inference, with simple SSH access, persistent filesystems, and high-performance multi-node clusters.
Features
- Provisioning GPU instances (B200, H100, A100, etc.)
- SSH access and instance management
- Persistent filesystem integration
- High-performance multi-node cluster setup
- Comprehensive troubleshooting and usage guides
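The provisioning feature above can be sketched against Lambda's public cloud API. This is a minimal illustration, not the skill's actual implementation: the base URL, endpoint path, and request field names (`region_name`, `instance_type_name`, `ssh_key_names`, `file_system_names`) are assumptions and should be verified against the current Lambda Cloud API reference.

```python
import json
import urllib.request

# Assumed base URL for the Lambda Cloud API; verify against current docs.
API_BASE = "https://cloud.lambdalabs.com/api/v1"


def build_launch_payload(region, instance_type, ssh_key_names, filesystem_names=()):
    """Build the JSON body for an instance-launch request.

    Field names here are assumptions based on Lambda's public API docs.
    """
    return {
        "region_name": region,
        "instance_type_name": instance_type,
        "ssh_key_names": list(ssh_key_names),
        "file_system_names": list(filesystem_names),
    }


def launch_instance(api_key, payload):
    """POST the payload to the (assumed) launch endpoint.

    Requires a real Lambda Cloud API key; not called in this sketch.
    """
    req = urllib.request.Request(
        f"{API_BASE}/instance-operations/launch",
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)


# Example: request one H100 instance with an SSH key and a persistent filesystem
# attached (instance-type and filesystem names are hypothetical).
payload = build_launch_payload(
    "us-east-1", "gpu_1x_h100_pcie", ["my-ssh-key"], ["training-data"]
)
print(payload["instance_type_name"])
```

The same payload shape would apply to other instance types (A100, B200, etc.); listing available types and regions is a separate API call in practice.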
Use cases
- Running long ML training jobs on dedicated GPUs
- Setting up persistent storage for datasets and models
- Deploying large-scale multi-GPU training clusters
- Leveraging pre-installed ML stacks (Lambda Stack)
Non-goals
- Serverless or auto-scaling ML workloads (use Modal)
- Multi-cloud orchestration or cost optimization (use SkyPilot)
- Cheap spot instances or serverless endpoints (use RunPod)
- Lowest-price GPU marketplace (use Vast.ai)
Trust
- Warning, issues need attention: in the last 90 days, 17 issues were opened and only 4 were closed, indicating a low closure rate and potentially slow maintainer response.
Installation
npx skills add davila7/claude-code-templates
Runs the Vercel skills CLI (skills.sh) via npx. Requires Node.js installed locally and at least one skills-compatible agent (Claude Code, Cursor, Codex, etc.). Assumes the repository follows the agentskills.io format.
Similar extensions
Cloudflare Deploy
Score 99. Deploy applications and infrastructure to Cloudflare using Workers, Pages, and related platform services. Use when the user asks to deploy, host, publish, or set up a project on Cloudflare.
Cost Optimization
Score 98. Optimize cloud costs across AWS, Azure, GCP, and OCI through resource rightsizing, tagging strategies, reserved instances, and spending analysis. Use when reducing cloud expenses, analyzing infrastructure costs, or implementing cost governance policies.
SkyPilot Multi-Cloud Orchestration
Score 98. Multi-cloud orchestration for ML workloads with automatic cost optimization. Use when you need to run training or batch jobs across multiple clouds, leverage spot instances with auto-recovery, or optimize GPU costs across providers.
Alterlab Modal
Score 98. Part of the AlterLab Academic Skills suite. Run Python code in the cloud with serverless containers, GPUs, and autoscaling. Use when deploying ML models, running batch processing jobs, scheduling compute-intensive tasks, or serving APIs that require GPU acceleration or dynamic scaling.
AWQ Quantization
Score 95. Activation-aware weight quantization for 4-bit LLM compression with 3x speedup and minimal accuracy loss. Use when deploying large models (7B-70B) on limited GPU memory, when you need faster inference than GPTQ with better accuracy preservation, or for instruction-tuned and multimodal models. MLSys 2024 Best Paper Award winner.