Design, develop, and maintain scalable ETL/ELT pipelines on Databricks, covering data ingestion from multiple sources, workflow optimization with Apache Spark, and integration with major cloud platforms.
Requirements
- Design, develop, and maintain scalable ETL/ELT pipelines using Databricks.
- Build and optimize data workflows using Apache Spark (PySpark/Scala).
- Implement data ingestion from multiple sources (APIs, databases, streaming platforms).
- Develop and manage data lakes and lakehouse architectures.
- Work with cloud platforms such as Amazon Web Services, Microsoft Azure, or Google Cloud Platform.
- Optimize performance of queries and large-scale data processing jobs.
- Ensure data quality, governance, and security best practices.
- Collaborate with data scientists, analysts, and business stakeholders to deliver data solutions.
- Implement CI/CD pipelines and version control for data engineering workflows.
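To illustrate the kind of work the bullets above describe, here is a minimal, hedged Python sketch of a data-quality gate such as a pipeline might run before writing to a lakehouse table. It uses plain dict rows rather than a real Spark DataFrame, and all names (`validate_rows`, `QualityReport`, the field names) are hypothetical examples, not part of any specific stack mentioned in this posting.

```python
# Illustrative sketch only: a lightweight data-quality check of the kind a
# Databricks pipeline might run before a write step. Rows are plain dicts
# here; in a real pipeline this logic would typically run over a Spark
# DataFrame instead.
from dataclasses import dataclass, field

@dataclass
class QualityReport:
    total: int = 0                              # rows inspected
    passed: int = 0                             # rows with all required fields
    errors: list = field(default_factory=list)  # (row index, missing fields)

def validate_rows(rows, required_fields):
    """Check that each row (a dict) has non-null values for required fields."""
    report = QualityReport(total=len(rows))
    for i, row in enumerate(rows):
        missing = [f for f in required_fields if row.get(f) is None]
        if missing:
            report.errors.append((i, missing))
        else:
            report.passed += 1
    return report

# Example: two valid rows and one row missing a required field.
rows = [
    {"id": 1, "event": "click", "ts": "2024-01-01T00:00:00Z"},
    {"id": 2, "event": None, "ts": "2024-01-01T00:01:00Z"},
    {"id": 3, "event": "view", "ts": "2024-01-01T00:02:00Z"},
]
report = validate_rows(rows, required_fields=["id", "event", "ts"])
print(report.passed, report.total)  # 2 3
```

In practice, a gate like this would fail the pipeline run (or route bad rows to a quarantine table) when `report.errors` is non-empty, which is one common way the "data quality and governance" responsibility is enforced.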
To apply for this job, please visit jobs.workable.com.
