Data Engineer, Analytics Data Products – design, model, and implement complex ELT/ETL pipelines for the cleansed and curated data layers of the medallion architecture; develop advanced data transformations using dbt and PySpark; and collaborate across teams to define requirements and translate them into robust, scalable data models.
Requirements
- 2+ years of hands-on experience in a Data Engineering, Data Warehousing, Analytics Engineering, or equivalent role
- Proficiency in SQL and experience with complex, production-level data modeling
- Demonstrated experience designing, developing, and deploying end-to-end data products through the full Software Development Lifecycle
- Experience with a cloud data warehouse such as BigQuery
- Proficiency in Python for scripting and data manipulation, including knowledge of PySpark or other Spark APIs
- Familiarity with cloud services and data storage components in at least one major cloud provider (GCP or AWS)
- Experience with workflow orchestration tools (e.g., Airflow, Cloud Composer, or Prefect) and version control systems (Git)
- Experience operating in a dual-cloud environment (GCP/AWS)
- Experience with Infrastructure-as-Code (IaC) tools like Terraform
- Experience with advanced Lakehouse file formats like Iceberg or Delta Lake
- Familiarity with experimentation or A/B testing platforms and the data required to support them
- Experience upholding data product quality standards by integrating advanced testing, quality checks, and monitoring into the CI/CD pipeline
Benefits
- Medical, dental and vision benefits
- Flexible Spending Accounts (FSAs)
- 401(k) plan
- Paid vacation
- Paid sick days
- Paid parental leave
- Tuition reimbursement
- Professional development programs
To apply for this job, please visit job-boards.greenhouse.io.