AlterLab Zarr
Skill · Verified · Active
Part of the AlterLab Academic Skills suite. Chunked N-D arrays for cloud storage: compressed arrays, parallel I/O, S3/GCS integration, and NumPy/Dask/Xarray compatibility for large-scale scientific computing pipelines.
To enable efficient, parallel I/O and cloud-native workflows for large-scale scientific data by leveraging the Zarr library.
Features
- Chunked N-D array storage
- Compression and parallel I/O
- S3/GCS cloud storage integration
- NumPy, Dask, Xarray compatibility
- Efficient large-scale scientific computing
Use cases
- Storing and accessing large scientific datasets in cloud environments.
- Processing datasets larger than available RAM using Dask.
- Integrating Zarr arrays into existing scientific analysis workflows.
- Optimizing data storage and retrieval for high-performance computing.
Non-goals
- Providing a direct interface to cloud storage services beyond Zarr's integration.
- Replacing core data science libraries like NumPy, Dask, or Xarray.
- Handling real-time streaming data without explicit Dask integration.
Prerequisites
- Python 3.11+
- uv pip (recommended)
- zarr library
- s3fs (for S3)
- gcsfs (for GCS)
Installation
npx skills add AlterLab-IEU/AlterLab-Academic-Skills

Runs the Vercel skills CLI (skills.sh) via npx. Requires a local Node.js installation and at least one skills-compatible agent (Claude Code, Cursor, Codex, etc.), and assumes the repository follows the agentskills.io format.
Quality score: Verified

Similar extensions
Zarr Python — 97
Chunked N-D arrays for cloud storage. Compressed arrays, parallel I/O, S3/GCS integration, NumPy/Dask/Xarray compatible, for large-scale scientific computing pipelines.
Dask Data Science — 99
Part of the AlterLab Academic Skills suite. Distributed computing for larger-than-RAM pandas/NumPy workflows. Use when you need to scale existing pandas/NumPy code beyond memory or across clusters. Best for parallel file processing, distributed ML, integration with existing pandas code. For out-of-core analytics on a single machine use vaex; for in-memory speed use polars.
Optimize for GPU — 97
GPU-accelerate Python code using CuPy, Numba CUDA, Warp, cuDF, cuML, cuGraph, KvikIO, cuCIM, cuxfilter, cuVS, cuSpatial, and RAFT. Use whenever the user mentions GPU/CUDA/NVIDIA acceleration, or wants to speed up NumPy, pandas, scikit-learn, scikit-image, NetworkX, GeoPandas, or Faiss workloads. Covers physics simulation, differentiable rendering, mesh ray casting, particle systems (DEM/SPH/fluids), vector/similarity search, GPUDirect Storage file I/O, interactive dashboards, geospatial analysis, medical imaging, and sparse eigensolvers. Also use when you see CPU-bound Python code (loops, large arrays, ML pipelines, graph analytics, image processing) that would benefit from GPU acceleration, even if not explicitly requested.
OraClaw Forecast — 100
Time-series forecasting for AI agents. ARIMA and Holt-Winters forecasting with confidence intervals. Predict revenue, traffic, prices, or any sequential data. Inference latency under 5 ms.
SHAP Model Interpretability — 100
Model interpretability and explainability using SHAP (SHapley Additive exPlanations). Use this skill when explaining machine learning model predictions, computing feature importance, generating SHAP plots (waterfall, beeswarm, bar, scatter, force, heatmap), debugging models, analyzing model bias or fairness, comparing models, or implementing explainable AI. Works with tree-based models (XGBoost, LightGBM, Random Forest), deep learning (TensorFlow, PyTorch), linear models, and any black-box model.
Arize Evaluator — 100
Handles LLM-as-judge evaluation workflows on Arize including creating/updating evaluators, running evaluations on spans or experiments, managing tasks, trigger-run operations, column mapping, and continuous monitoring. Use when the user mentions create evaluator, LLM judge, hallucination, faithfulness, correctness, relevance, run eval, score spans, score experiment, trigger-run, column mapping, continuous monitoring, or improve evaluator prompt.