
Lambda Labs GPU Cloud

Skill · Active

Reserved and on-demand GPU cloud instances for ML training and inference. Use when you need dedicated GPU instances with simple SSH access, persistent filesystems, or high-performance multi-node clusters for large-scale training.

Purpose

To enable users to easily provision and manage dedicated GPU instances on Lambda Labs for ML training and inference, offering simple SSH access, persistent filesystems, and high-performance multi-node clusters.

Features

  • Provisioning GPU instances (B200, H100, A100, etc.)
  • SSH access and instance management
  • Persistent filesystem integration
  • High-performance multi-node cluster setup
  • Comprehensive troubleshooting and usage guides
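The provisioning feature above maps onto Lambda Cloud's public HTTP API. As a minimal sketch, this builds (but does not send) a launch request for a single on-demand instance; the endpoint path and field names are assumptions based on the public API reference and should be verified against the current docs:

```python
import json
import urllib.request

API_BASE = "https://cloud.lambdalabs.com/api/v1"  # public Lambda Cloud API base


def build_launch_request(api_key: str, instance_type: str, region: str, ssh_key: str):
    """Build (without sending) the HTTP request to launch an on-demand instance.

    Field names (region_name, instance_type_name, ssh_key_names) follow the
    Lambda Cloud launch endpoint; treat them as assumptions to double-check.
    """
    payload = {
        "region_name": region,
        "instance_type_name": instance_type,
        "ssh_key_names": [ssh_key],  # SSH key must already be registered in the account
    }
    return urllib.request.Request(
        f"{API_BASE}/instance-operations/launch",
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )


# Example: prepare a launch request for one H100 instance (not sent here).
req = build_launch_request("YOUR_API_KEY", "gpu_1x_h100_pcie", "us-east-1", "my-key")
print(req.full_url)
```

Sending the request (`urllib.request.urlopen(req)`) returns the new instance's ID, after which the instance is reachable over plain SSH as described above.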

Use Cases

  • Running long ML training jobs on dedicated GPUs
  • Setting up persistent storage for datasets and models
  • Deploying large-scale multi-GPU training clusters
  • Leveraging pre-installed ML stacks (Lambda Stack)

Non-Goals

  • Serverless or auto-scaling ML workloads (use Modal)
  • Multi-cloud orchestration or cost optimization (use SkyPilot)
  • Cheap spot instances or serverless endpoints (use RunPod)
  • Lowest-price GPU marketplace (use Vast.ai)

Trust

  • Warning (Issues Attention): In the last 90 days, 17 issues were opened and 4 were closed, indicating a low closure rate and potentially slow maintainer response.

Installation

npx skills add davila7/claude-code-templates

Runs the Vercel skills CLI (skills.sh) via npx. Requires Node.js locally and at least one installed skills-compatible agent (Claude Code, Cursor, Codex, …), and assumes the repo follows the agentskills.io format.

Quality Score

94/100
Analyzed 1 day ago

Trust Signals

Last Commit: 1 day ago
Stars: 27.2k
License: MIT
Status

Similar Extensions

Lambda Labs GPU Cloud

97

Reserved and on-demand GPU cloud instances for ML training and inference. Use when you need dedicated GPU instances with simple SSH access, persistent filesystems, or high-performance multi-node clusters for large-scale training.

Skill
Orchestra-Research

Cloudflare Deploy

99

Deploy applications and infrastructure to Cloudflare using Workers, Pages, and related platform services. Use when the user asks to deploy, host, publish, or set up a project on Cloudflare.

Skill
openai

Cost Optimization

98

Optimize cloud costs across AWS, Azure, GCP, and OCI through resource rightsizing, tagging strategies, reserved instances, and spending analysis. Use when reducing cloud expenses, analyzing infrastructure costs, or implementing cost governance policies.

Skill
wshobson

SkyPilot Multi-Cloud Orchestration

98

Multi-cloud orchestration for ML workloads with automatic cost optimization. Use when you need to run training or batch jobs across multiple clouds, leverage spot instances with auto-recovery, or optimize GPU costs across providers.

Skill
Orchestra-Research

AlterLab Modal

98

Part of the AlterLab Academic Skills suite. Run Python code in the cloud with serverless containers, GPUs, and autoscaling. Use when deploying ML models, running batch processing jobs, scheduling compute-intensive tasks, or serving APIs that require GPU acceleration or dynamic scaling.

Skill
AlterLab-IEU

AWQ Quantization

95

Activation-aware weight quantization for 4-bit LLM compression with 3x speedup and minimal accuracy loss. Use when deploying large models (7B-70B) on limited GPU memory, when you need faster inference than GPTQ with better accuracy preservation, or for instruction-tuned and multimodal models. MLSys 2024 Best Paper Award winner.

Skill
Orchestra-Research