Weights And Biases
Skill · Active
Track ML experiments with automatic logging, visualize training in real-time, optimize hyperparameters with sweeps, and manage model registry with W&B - collaborative MLOps platform
To streamline and automate tracking, visualizing, and optimizing machine learning experiments on the Weights & Biases platform.
Features
- Automatic ML experiment tracking
- Real-time training visualization
- Hyperparameter optimization with sweeps
- Model registry management
- Data and model versioning with artifacts (a minimal tracking-and-versioning sketch follows this list)
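As a rough illustration of what these features look like in code, here is a minimal sketch using the public `wandb` Python API; the project name, config values, metric, and file path are placeholders invented for this example, not part of the skill.

```python
# Minimal tracking-and-versioning sketch against the public wandb API.
# Project name, config values, metrics, and file paths are placeholders.
from pathlib import Path

import wandb

run = wandb.init(project="demo-project", config={"lr": 1e-3, "epochs": 5})

for epoch in range(run.config.epochs):
    train_loss = 1.0 / (epoch + 1)  # stand-in for a real training metric
    wandb.log({"epoch": epoch, "train_loss": train_loss})

# Version a (placeholder) model file as a W&B artifact.
Path("model.pt").write_bytes(b"placeholder weights")
artifact = wandb.Artifact("demo-model", type="model")
artifact.add_file("model.pt")
run.log_artifact(artifact)

run.finish()
```

The same pattern scales to real training loops: call `wandb.log` wherever metrics are produced, and log artifacts once outputs exist on disk.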
Use Cases
- Use when you need to log metrics and track configurations for ML experiments.
- Use to visualize model training progress and compare different runs.
- Use to automate hyperparameter tuning to find optimal model configurations (see the sweep sketch after this list).
- Use to manage and version ML models and datasets throughout the MLOps lifecycle.
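For the hyperparameter-tuning use case, a hedged sketch of a sweep follows, using `wandb.sweep` and `wandb.agent`; the search method, parameter ranges, and the placeholder `train()` objective are assumptions made for the example.

```python
# Hyperparameter-sweep sketch using wandb.sweep / wandb.agent.
# Parameter ranges and the train() objective are illustrative only.
import wandb

sweep_config = {
    "method": "bayes",
    "metric": {"name": "val_loss", "goal": "minimize"},
    "parameters": {
        "lr": {"min": 1e-5, "max": 1e-1},
        "batch_size": {"values": [16, 32, 64]},
    },
}

def train():
    run = wandb.init()
    # Placeholder objective; a real run would train and evaluate a model.
    val_loss = run.config.lr * run.config.batch_size
    wandb.log({"val_loss": val_loss})
    run.finish()

sweep_id = wandb.sweep(sweep_config, project="demo-project")
wandb.agent(sweep_id, function=train, count=10)
```

Because each agent pulls its next configuration from the W&B server, the same script can be launched on multiple machines to parallelize the search.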
Non-Goals
- This skill does not replace the core functionality of the Weights & Biases platform itself.
- It does not provide generic ML model training or data preprocessing capabilities outside of W&B integration.
Trust
- Warning (issues attention): in the last 90 days, 17 issues were opened and 4 were closed, indicating a low closure rate and potential maintainer-responsiveness issues.
Execution
- Info (pinned dependencies): the skill relies on the `wandb` package. The SKILL.md lists it as a dependency, but no lockfile or explicit version pin for `wandb` is included in the skill's bundle (a version-guard sketch follows).
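One lightweight mitigation for the unpinned dependency is to fail fast when the installed `wandb` version drifts from a known-good release. A minimal sketch, assuming you have validated a specific release yourself (the version shown is hypothetical):

```python
# Guard against an unpinned wandb dependency at runtime.
# The pinned version is an example, not an official requirement.
import importlib.metadata

REQUIRED = "0.16.6"  # hypothetical known-good release
installed = importlib.metadata.version("wandb")
if installed != REQUIRED:
    raise RuntimeError(f"expected wandb=={REQUIRED}, found wandb=={installed}")
```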
Installation
`npx skills add davila7/claude-code-templates`
Runs the Vercel skills CLI (skills.sh) via npx; requires Node.js locally and at least one installed skills-compatible agent (Claude Code, Cursor, Codex, …). Assumes the repo follows the agentskills.io format.
Similar Extensions
Weights And Biases (score 96)
Track ML experiments with automatic logging, visualize training in real-time, optimize hyperparameters with sweeps, and manage model registry with W&B - collaborative MLOps platform
Mlflow (score 98)
Track ML experiments, manage model registry with versioning, deploy models to production, and reproduce experiments with MLflow - framework-agnostic ML lifecycle platform
MLflow (score 96)
Track ML experiments, manage model registry with versioning, deploy models to production, and reproduce experiments with MLflow - framework-agnostic ML lifecycle platform
Hf Cli (score 100)
Hugging Face Hub CLI (`hf`) for downloading, uploading, and managing models, datasets, spaces, buckets, repos, papers, jobs, and more on the Hugging Face Hub. Use when: handling authentication; managing local cache; managing Hugging Face Buckets; running or scheduling jobs on Hugging Face infrastructure; managing Hugging Face repos; discussions and pull requests; browsing models, datasets and spaces; reading, searching, or browsing academic papers; managing collections; querying datasets; configuring spaces; setting up webhooks; or deploying and managing HF Inference Endpoints. Make sure to use this skill whenever the user mentions 'hf', 'huggingface', 'Hugging Face', 'huggingface-cli', or 'hugging face cli', or wants to do anything related to the Hugging Face ecosystem and to AI and ML in general. Also use for cloud storage needs like training checkpoints, data pipelines, or agent traces. Use even if the user doesn't explicitly ask for a CLI command. Replaces the deprecated `huggingface-cli`.
Arize Experiment (score 100)
Creates, runs, and analyzes Arize experiments for evaluating and comparing model performance. Covers experiment CRUD, exporting runs, comparing results, and evaluation workflows using the ax CLI. Use when the user mentions create experiment, run experiment, compare models, model performance, evaluate AI, experiment results, benchmark, A/B test models, or measure accuracy.
Arize Evaluator (score 100)
Handles LLM-as-judge evaluation workflows on Arize including creating/updating evaluators, running evaluations on spans or experiments, managing tasks, trigger-run operations, column mapping, and continuous monitoring. Use when the user mentions create evaluator, LLM judge, hallucination, faithfulness, correctness, relevance, run eval, score spans, score experiment, trigger-run, column mapping, continuous monitoring, or improve evaluator prompt.