AlterLab Polars
Part of the AlterLab Academic Skills suite.

Fast in-memory DataFrame library for datasets that fit in RAM. Use when pandas is too slow but data still fits in memory. Lazy evaluation, parallel execution, Apache Arrow backend. Best for 1-100GB datasets, ETL pipelines, faster pandas replacement. For larger-than-RAM data use dask or vaex.
Provides a significantly faster in-memory DataFrame library for datasets that fit within RAM, serving as a high-performance alternative to pandas for ETL pipelines and data analysis.
Features
- Fast in-memory DataFrame processing
- Lazy evaluation for query optimization
- Parallel execution for multi-core performance
- Apache Arrow backend for efficient data handling
- Expressive API for complex data transformations
Use Cases
- Replacing slow pandas operations for datasets fitting in RAM
- Optimizing ETL pipelines for faster data processing
- Performing complex data analysis and transformations efficiently
- Working with datasets in the 1-100GB range
Non-Goals
- Handling datasets larger than available RAM (use dask or vaex)
- Replacing specialized database connectors
- Providing a full-fledged analytics platform beyond DataFrame operations
Trust
- Warning: there are 2 open issues and 0 closed issues in the last 90 days, indicating a very low closure rate and slow response to new issues.
Installation
npx skills add AlterLab-IEU/AlterLab-Academic-Skills

Runs the Vercel skills CLI (skills.sh) via npx. Requires Node.js locally and at least one installed skills-compatible agent (Claude Code, Cursor, Codex, …). Assumes the repo follows the agentskills.io format.
Similar Extensions
OraClaw Forecast
Score 100. Time series forecasting for AI agents. ARIMA and Holt-Winters predictions with confidence intervals. Predict revenue, traffic, prices, or any sequential data. Sub-5ms inference.
Polars
Score 99. Fast in-memory DataFrame library for datasets that fit in RAM. Use when pandas is too slow but data still fits in memory. Lazy evaluation, parallel execution, Apache Arrow backend. Best for 1-100GB datasets, ETL pipelines, faster pandas replacement. For larger-than-RAM data use dask or vaex.
Chdb Datastore
Score 95. Drop-in pandas replacement with ClickHouse performance. Use `import chdb.datastore as pd` (or `from datastore import DataStore`) and write standard pandas code: same API, 10-100x faster on large datasets. Supports 16+ data sources (MySQL, PostgreSQL, S3, MongoDB, ClickHouse, Iceberg, Delta Lake, etc.) and 10+ file formats (Parquet, CSV, JSON, Arrow, ORC, etc.) with cross-source joins. Use this skill when the user wants to analyze data with pandas-style syntax, speed up slow pandas code, query remote databases or cloud storage as DataFrames, or join data across different sources, even if they don't explicitly mention chdb or DataStore. Do NOT use for raw SQL queries, ClickHouse server administration, or non-Python languages.
Measure Dashboard Requirements
Score 100. Specifies requirements for an analytics dashboard including metrics, visualizations, filters, and data sources. Use when requesting dashboards from data teams, defining KPI tracking, or documenting reporting needs.
Meta Observer
Score 100. Track skill performance and emerging patterns.
Market Movers
Score 100. When the user wants to track App Store chart rank changes, find top gainers and losers, detect breakout apps entering the top 100, or identify apps dropping out of charts. Also use when the user mentions "chart movers", "rank changes", "who's rising", "who's falling", "new chart entries", "top gainers", or "market shifts". For broader market overview, see market-pulse. For competitive keyword analysis, see competitor-analysis.