SwanLab Experiment Tracking
Provides guidance for experiment tracking with SwanLab. Use when you need open-source run tracking, local or self-hosted dashboards, and lightweight media logging for ML workflows.
Enables users to track and visualize their machine learning experiments efficiently using SwanLab, supporting local, self-hosted, or cloud deployments.
Features
- Open-source ML experiment tracking
- Local or self-hosted dashboards
- Lightweight media logging (images, audio, text); see the sketch after this list
- Integration with PyTorch, Transformers, etc.
- Comparison of multiple experiment runs
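A hedged sketch of the media logging feature, assuming swanlab's media wrappers (`swanlab.Image`, `swanlab.Audio`, `swanlab.Text`) and the pillow/soundfile extras listed under prerequisites; the project name and all data below are synthetic placeholders:

```python
import numpy as np

import swanlab

swanlab.init(project="media-demo")  # placeholder project name

# Image: numpy arrays, PIL images, or file paths (pillow required).
noise = (np.random.rand(64, 64, 3) * 255).astype(np.uint8)
image = swanlab.Image(noise, caption="random noise sample")

# Audio: a waveform plus a sample rate (soundfile required).
waveform = np.sin(np.linspace(0, 2 * np.pi * 440, 16000)).astype(np.float32)
audio = swanlab.Audio(waveform, sample_rate=16000, caption="440 Hz tone")

# Text: plain strings such as prompts or model outputs.
text = swanlab.Text("example model output", caption="sample prediction")

swanlab.log({"examples/image": image, "examples/audio": audio, "examples/text": text})
swanlab.finish()
```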
Use cases
- Tracking ML experiments with metrics and configurations
- Visualizing training progress with scalar charts and logged media
- Comparing different experimental runs across seeds or hyperparameters
- Working with local or self-hosted dashboards instead of managed SaaS
Non-goals
- Replacing core ML framework functionalities
- Providing advanced hyperparameter optimization algorithms
- Managing cloud infrastructure for training
Workflow
- Initialize a SwanLab run with a project, experiment name, and configuration.
- Run the ML training or evaluation loop.
- Log metrics (loss, accuracy, etc.) and media (images, audio) at regular intervals.
- Optionally integrate with ML frameworks (PyTorch, Transformers, etc.); see the callback sketch below.
- Finish the run and optionally view local logs with `swanlab watch`. A minimal end-to-end sketch follows this list.
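A minimal end-to-end sketch of this workflow, assuming swanlab's documented Python API (`swanlab.init`, `swanlab.log`, `swanlab.finish`); the project name, experiment name, and metric values are placeholders:

```python
import random

import swanlab

# Hyperparameters recorded as the run's config.
config = {"learning_rate": 1e-3, "epochs": 3, "batch_size": 64}

# Initialize the run; names are placeholders for your own.
swanlab.init(
    project="demo-project",
    experiment_name="baseline",
    config=config,
)

# Dummy loop standing in for a real training/evaluation loop.
for epoch in range(config["epochs"]):
    train_loss = 1.0 / (epoch + 1) + random.random() * 0.05  # fake metric
    val_acc = 0.70 + 0.05 * epoch                            # fake metric
    # Each key becomes a scalar chart in the dashboard.
    swanlab.log({"train/loss": train_loss, "val/acc": val_acc}, step=epoch)

# Flush buffered data and close the run.
swanlab.finish()
```

After the run completes, `swanlab watch` serves a local dashboard from the directory containing the logs.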
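For the framework-integration step, SwanLab ships trainer callbacks; the sketch below targets the Hugging Face Trainer. The import path `swanlab.integration.transformers` matches recent swanlab releases but should be treated as an assumption and verified against your installed version; `model` and `train_dataset` are placeholders assumed to be defined elsewhere:

```python
# Assumed import path; verify against your swanlab version.
from swanlab.integration.transformers import SwanLabCallback
from transformers import Trainer, TrainingArguments

# Callback kwargs are forwarded to swanlab.init (assumption); names are placeholders.
swanlab_callback = SwanLabCallback(
    project="demo-project",
    experiment_name="bert-finetune",
)

trainer = Trainer(
    model=model,                                   # your model, assumed defined
    args=TrainingArguments(output_dir="./output"),
    train_dataset=train_dataset,                   # your dataset, assumed defined
    callbacks=[swanlab_callback],                  # SwanLab logs trainer metrics
)
trainer.train()
```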
Practices
- Experiment Tracking
- MLOps
- Data Visualization
Prerequisites
- Python 3.7+
- `pip install "swanlab>=0.7.11" "pillow>=9.0.0" "soundfile>=0.12.0"` (pillow enables image logging; soundfile enables audio logging)
Installation
npx skills add Orchestra-Research/AI-Research-SKILLs

Runs the Vercel skills CLI (skills.sh) via npx. Requires Node.js installed locally and at least one skills-compatible agent (Claude Code, Cursor, Codex, etc.), and assumes the repository follows the agentskills.io format.
Quality score: Verified

Similar extensions
Pytorch Lightning (score: 99)
High-level PyTorch framework with Trainer class, automatic distributed training (DDP/FSDP/DeepSpeed), callbacks system, and minimal boilerplate. Scales from laptop to supercomputer with the same code. Use when you want clean training loops with built-in best practices.
Huggingface Accelerate (score: 99)
Simplest distributed training API: four lines to add distributed support to any PyTorch script. Unified API for DeepSpeed/FSDP/Megatron/DDP. Automatic device placement, mixed precision (FP16/BF16/FP8). Interactive config, single launch command. HuggingFace ecosystem standard.
TensorBoard (score: 98)
Visualize training metrics, debug models with histograms, compare experiments, visualize model graphs, and profile performance with TensorBoard, Google's ML visualization toolkit.
Hf Cli (score: 100)
Hugging Face Hub CLI (`hf`) for downloading, uploading, and managing models, datasets, spaces, buckets, repos, papers, jobs, and more on the Hugging Face Hub. Use when: handling authentication; managing local cache; managing Hugging Face Buckets; running or scheduling jobs on Hugging Face infrastructure; managing Hugging Face repos; discussions and pull requests; browsing models, datasets and spaces; reading, searching, or browsing academic papers; managing collections; querying datasets; configuring spaces; setting up webhooks; or deploying and managing HF Inference Endpoints. Make sure to use this skill whenever the user mentions 'hf', 'huggingface', 'Hugging Face', 'huggingface-cli', or 'hugging face cli', or wants to do anything related to the Hugging Face ecosystem and to AI and ML in general. Also use for cloud storage needs like training checkpoints, data pipelines, or agent traces. Use even if the user doesn't explicitly ask for a CLI command. Replaces the deprecated `huggingface-cli`.
Arize Experiment (score: 100)
Creates, runs, and analyzes Arize experiments for evaluating and comparing model performance. Covers experiment CRUD, exporting runs, comparing results, and evaluation workflows using the ax CLI. Use when the user mentions create experiment, run experiment, compare models, model performance, evaluate AI, experiment results, benchmark, A/B test models, or measure accuracy.
Arize Evaluator (score: 100)
Handles LLM-as-judge evaluation workflows on Arize including creating/updating evaluators, running evaluations on spans or experiments, managing tasks, trigger-run operations, column mapping, and continuous monitoring. Use when the user mentions create evaluator, LLM judge, hallucination, faithfulness, correctness, relevance, run eval, score spans, score experiment, trigger-run, column mapping, continuous monitoring, or improve evaluator prompt.