
Lambda Labs GPU Cloud

Skill · Active

Reserved and on-demand GPU cloud instances for ML training and inference. Use when you need dedicated GPU instances with simple SSH access, persistent filesystems, or high-performance multi-node clusters for large-scale training.

Purpose

To enable users to easily provision and manage dedicated GPU instances on Lambda Labs for ML training and inference, offering simple SSH access, persistent filesystems, and high-performance multi-node clusters.

Capabilities

  • Provisioning GPU instances (B200, H100, A100, etc.)
  • SSH access and instance management
  • Persistent filesystem integration
  • High-performance multi-node cluster setup
  • Comprehensive troubleshooting and usage guides
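As a sketch of what the provisioning capability involves under the hood, the snippet below assembles a launch request in the shape used by Lambda Cloud's public API v1 (`/instance-operations/launch`). The instance type, region, and SSH key names are illustrative placeholders, not values the skill prescribes, and the request is only sent if an API key is present in the environment.

```python
import json
import os
import urllib.request

# Launch endpoint per the public Lambda Cloud API v1; field names below
# (region_name, instance_type_name, ssh_key_names) follow that API.
API_URL = "https://cloud.lambdalabs.com/api/v1/instance-operations/launch"

def build_launch_payload(instance_type, region, ssh_key):
    """Assemble the JSON body for a single on-demand instance launch."""
    return {
        "region_name": region,
        "instance_type_name": instance_type,
        "ssh_key_names": [ssh_key],
    }

# Placeholder values for illustration only.
payload = build_launch_payload("gpu_1x_a100", "us-east-1", "my-key")
print(json.dumps(payload))

# Only perform the real request when an API key is configured.
api_key = os.environ.get("LAMBDA_API_KEY")
if api_key:
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        print(resp.read().decode())
```

Listing available instance types and terminating instances follow the same pattern against their respective v1 endpoints.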

Use Cases

  • Running long ML training jobs on dedicated GPUs
  • Setting up persistent storage for datasets and models
  • Deploying large-scale multi-GPU training clusters
  • Leveraging pre-installed ML stacks (Lambda Stack)
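For the SSH-access and persistent-storage scenarios above, a typical session looks like the sketch below. The IP address, key path, filesystem name, and mount location are assumptions for illustration (persistent filesystems are mounted under the instance's home directory), not guaranteed values; the script composes and prints the command rather than connecting.

```shell
# Sketch of the SSH workflow once an instance is running.
INSTANCE_IP="203.0.113.10"   # placeholder; reported by the dashboard or API after launch
FS_NAME="training-data"      # placeholder persistent-filesystem name

# Compose the remote training command, assuming the filesystem is
# mounted under the default user's home directory.
REMOTE_CMD="python train.py --data /home/ubuntu/${FS_NAME}"

# Print the full command; run it directly when the instance is reachable.
echo ssh -i ~/.ssh/my-key "ubuntu@${INSTANCE_IP}" "'${REMOTE_CMD}'"
```

Because the filesystem persists independently of the instance, the same dataset path remains valid across instance terminations and relaunches in the same region.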

Non-Goals

  • Serverless or auto-scaling ML workloads (use Modal)
  • Multi-cloud orchestration or cost optimization (use SkyPilot)
  • Cheap spot instances or serverless endpoints (use RunPod)
  • Lowest-price GPU marketplace (use Vast.ai)

Trust

  • Warning (issues attention): In the last 90 days, 17 issues were opened and 4 were closed, indicating a low closure rate and potentially slow maintainer response.

Installation

npx skills add davila7/claude-code-templates

This runs the Vercel skills CLI (skills.sh) via npx. It requires Node.js installed locally and at least one skills-compatible agent (Claude Code, Cursor, Codex, etc.), and assumes the repository follows the agentskills.io format.

Quality Score

94/100
Analyzed 1 day ago

Trust Signals

  • Last commit: 1 day ago
  • Stars: 27.2k
  • License: MIT

Similar Extensions

Lambda Labs GPU Cloud

97

Reserved and on-demand GPU cloud instances for ML training and inference. Use when you need dedicated GPU instances with simple SSH access, persistent filesystems, or high-performance multi-node clusters for large-scale training.

Skill · Orchestra-Research

Cloudflare Deploy

99

Deploy applications and infrastructure to Cloudflare using Workers, Pages, and related platform services. Use when the user asks to deploy, host, publish, or set up a project on Cloudflare.

Skill · openai

Cost Optimization

98

Optimize cloud costs across AWS, Azure, GCP, and OCI through resource rightsizing, tagging strategies, reserved instances, and spending analysis. Use when reducing cloud expenses, analyzing infrastructure costs, or implementing cost governance policies.

Skill · wshobson

SkyPilot Multi-Cloud Orchestration

98

Multi-cloud orchestration for ML workloads with automatic cost optimization. Use when you need to run training or batch jobs across multiple clouds, leverage spot instances with auto-recovery, or optimize GPU costs across providers.

Skill · Orchestra-Research

AlterLab Modal

98

Part of the AlterLab Academic Skills suite. Run Python code in the cloud with serverless containers, GPUs, and autoscaling. Use when deploying ML models, running batch processing jobs, scheduling compute-intensive tasks, or serving APIs that require GPU acceleration or dynamic scaling.

Skill · AlterLab-IEU

AWQ Quantization

95

Activation-aware weight quantization for 4-bit LLM compression with 3x speedup and minimal accuracy loss. Use when deploying large models (7B-70B) on limited GPU memory, when you need faster inference than GPTQ with better accuracy preservation, or for instruction-tuned and multimodal models. MLSys 2024 Best Paper Award winner.

Skill · Orchestra-Research