Airflow DAG Patterns
Build production Apache Airflow DAGs with best practices for operators, sensors, testing, and deployment. Use when creating data pipelines, orchestrating workflows, or scheduling batch jobs.
To empower users to build production-grade Apache Airflow DAGs efficiently and reliably, incorporating best practices for operators, sensors, testing, and deployment.
Features
- Production-ready Airflow DAG patterns
- Best practices for operators and sensors
- Guidance on DAG testing strategies
- Deployment strategies for Airflow DAGs
- TaskFlow API and dynamic DAG generation examples
Use Cases
- Creating data pipeline orchestration with Airflow
- Designing complex DAG structures and dependencies
- Implementing custom operators and sensors
- Testing Airflow DAGs locally and in CI/CD
Non-Goals
- Managing the Airflow environment itself (installation, configuration)
- Providing specific ETL/ELT logic beyond the orchestration patterns
- Real-time data processing beyond batch scheduling
Installation
First, add the marketplace:
/plugin marketplace add wshobson/agents
/plugin install data-engineering@claude-code-workflows
Similar Extensions
Data Engineer
Score: 94. Build scalable data pipelines, modern data warehouses, and real-time streaming architectures. Implements Apache Spark, dbt, Airflow, and cloud-native data platforms.
Orchestrate ML Pipeline
Score: 99. Orchestrate end-to-end machine learning pipelines using Prefect or Airflow with DAG construction, task dependencies, retry logic, scheduling, monitoring, and integration with MLflow, DVC, and feature stores for production ML workflows. Use when automating multi-step ML workflows from data ingestion to deployment, scheduling periodic model retraining, coordinating distributed training tasks, or managing retry logic and failure recovery across pipeline stages.
Senior Data Engineer
Score: 95. Data engineering skill for building scalable data pipelines, ETL/ELT systems, and data infrastructure. Expertise in Python, SQL, Spark, Airflow, dbt, Kafka, and the modern data stack. Includes data modeling, pipeline orchestration, data quality, and DataOps. Use when designing data architectures, building data pipelines, optimizing data workflows, implementing data governance, or troubleshooting data issues.
Flow Nexus Platform
Score: 100. Comprehensive Flow Nexus platform management: authentication, sandboxes, app deployment, payments, and challenges.
Agent Worker Specialist
Score: 100. Agent skill for worker-specialist; invoke with $agent-worker-specialist.
Do In Parallel
Score: 100. Launch multiple sub-agents in parallel to execute tasks across files or targets, with intelligent model selection, quality-focused prompting, and meta-judge → LLM-as-a-judge verification.