
Data Engineer


Build scalable data pipelines, modern data warehouses, and real-time streaming architectures using Apache Spark, dbt, Airflow, and cloud-native data platforms.

Purpose

To provide expert guidance for designing and implementing robust, scalable, and cost-effective data pipelines and modern data platforms.

Features

  • Design batch or streaming data pipelines
  • Build data warehouses and lakehouse architectures
  • Implement data quality, lineage, and governance
  • Leverage Apache Spark, dbt, Airflow, and cloud platforms
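The features above center on batch pipelines with data-quality checks. As a minimal, hypothetical sketch of that pattern in plain Python (not code from the skill itself; names like `Order` and `quality_check` are invented for illustration, and in practice such logic would live in a Spark job or an Airflow task):

```python
# Minimal batch ETL sketch with a data-quality gate, stdlib only.
from dataclasses import dataclass


@dataclass
class Order:
    order_id: str
    amount: float


def extract(raw_rows):
    """Extract: parse raw dicts into typed records."""
    return [Order(r["order_id"], float(r["amount"])) for r in raw_rows]


def quality_check(orders):
    """Quality gate: reject batches with duplicate keys or negative amounts."""
    ids = [o.order_id for o in orders]
    if len(ids) != len(set(ids)):
        raise ValueError("duplicate order_id in batch")
    if any(o.amount < 0 for o in orders):
        raise ValueError("negative amount in batch")
    return orders


def transform(orders):
    """Transform: aggregate batch-level metrics."""
    return {"row_count": len(orders), "revenue": sum(o.amount for o in orders)}


raw = [{"order_id": "a1", "amount": "19.99"}, {"order_id": "a2", "amount": "5.00"}]
summary = transform(quality_check(extract(raw)))
print(summary)
```

Failing the quality gate raises before any downstream write happens, which is the same fail-fast behavior a dbt test or an Airflow task failure provides in a real pipeline.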

Use Cases

  • Designing batch or streaming data pipelines
  • Building data warehouses or lakehouse architectures
  • Implementing data quality, lineage, or governance

Non-Goals

  • Exploratory data analysis
  • ML model development without pipelines
  • Operations without data source or storage access

Trust

  • Warning (issues attention): In the last 90 days, 17 issues were opened and 4 were closed, a closure rate below 50% that may indicate slow maintainer response.

Installation

npx skills add davila7/claude-code-templates

Runs the Vercel skills CLI (skills.sh) via npx. Requires Node.js locally and at least one installed skills-compatible agent (Claude Code, Cursor, Codex, …). Assumes the repo follows the agentskills.io format.

Quality Score

94/100
Analyzed about 24 hours ago

Trust Signals

Last commit: 1 day ago
Stars: 27.2k
License: MIT

Similar Extensions

Airflow Dag Patterns

95

Build production Apache Airflow DAGs with best practices for operators, sensors, testing, and deployment. Use when creating data pipelines, orchestrating workflows, or scheduling batch jobs.

Skill · wshobson

Senior Data Engineer

95

Data engineering skill for building scalable data pipelines, ETL/ELT systems, and data infrastructure. Expertise in Python, SQL, Spark, Airflow, dbt, Kafka, and modern data stack. Includes data modeling, pipeline orchestration, data quality, and DataOps. Use when designing data architectures, building data pipelines, optimizing data workflows, implementing data governance, or troubleshooting data issues.

Skill · alirezarezvani

Spark Optimization

99

Optimize Apache Spark jobs with partitioning, caching, shuffle optimization, and memory tuning. Use when improving Spark performance, debugging slow jobs, or scaling data processing pipelines.

Skill · wshobson

Spark Engineer

99

Use when writing Spark jobs, debugging performance issues, or configuring cluster settings for Apache Spark applications, distributed data processing pipelines, or big data workloads. Invoke to write DataFrame transformations, optimize Spark SQL queries, implement RDD pipelines, tune shuffle operations, configure executor memory, process .parquet files, handle data partitioning, or build structured streaming analytics.

Skill · jeffallan

Dbt Transformation Patterns

98

Master dbt (data build tool) for analytics engineering with model organization, testing, documentation, and incremental strategies. Use when building data transformations, creating data models, or implementing analytics engineering best practices.

Skill · wshobson

Snowflake Development

98

Use when writing Snowflake SQL, building data pipelines with Dynamic Tables or Streams/Tasks, using Cortex AI functions, creating Cortex Agents, writing Snowpark Python, configuring dbt for Snowflake, or troubleshooting Snowflake errors.

Skill · alirezarezvani

© 2025 SkillRepo · Find the right skill, skip the noise.