We are looking for a Senior Data Engineer to join our team and work with cloud technologies such as Databricks, Kafka, Spark, dbt, and the Python data ecosystem to build robust data pipelines and scalable infrastructure.
Responsibilities
- Design, implement, and maintain robust and scalable data pipelines using AWS, Azure, and containerization technologies.
- Develop and maintain ETL/ELT processes to extract, transform, and load data from various sources into data warehouses and data lakes.
- Collaborate with data scientists, analysts, and other engineers to ensure seamless data flow and availability across the organization.
- Optimize data storage and retrieval performance using cloud warehouse services such as Amazon Redshift, Azure Synapse Analytics, or other relevant technologies.
- Monitor, troubleshoot, and optimize data processing pipelines for performance, reliability, and cost-efficiency.
- Automate manual data processing tasks and improve data quality by implementing data validation and monitoring systems.
- Implement and maintain CI/CD pipelines for data workflow automation and deployment.
- Ensure compliance with data governance, security, and privacy regulations across all data systems.
- Participate in code reviews and ensure the use of best practices and documentation for data engineering solutions.
- Stay up to date with the latest data engineering trends, cloud services, and technologies to continuously improve system performance and capabilities.
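
To illustrate the kind of data-validation work mentioned above, here is a minimal sketch of a batch validation step that keeps bad rows out of a warehouse load. The field names and rules are hypothetical, chosen only for illustration; a real pipeline would derive them from the actual schema.

```python
# Hypothetical required fields for an event record; assumptions for illustration.
REQUIRED_FIELDS = {"user_id", "event_time", "amount"}

def validate_record(record: dict) -> list:
    """Return a list of validation errors for one record (empty list = valid)."""
    errors = []
    missing = REQUIRED_FIELDS - record.keys()
    if missing:
        errors.append("missing fields: %s" % sorted(missing))
    amount = record.get("amount")
    # Example business rule: amounts, when present, must be non-negative numbers.
    if amount is not None and (not isinstance(amount, (int, float)) or amount < 0):
        errors.append("amount must be a non-negative number")
    return errors

def partition_batch(records):
    """Split a batch into (valid, rejected) so invalid rows never reach the load step.

    Rejected entries keep their error list so they can be routed to a
    dead-letter table for monitoring.
    """
    valid, rejected = [], []
    for record in records:
        errors = validate_record(record)
        if errors:
            rejected.append((record, errors))
        else:
            valid.append(record)
    return valid, rejected
```

A step like this typically runs between extraction and load, with the rejected partition feeding the monitoring systems described above.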
Benefits
- Competitive salary and flexible payment options.
- Opportunities for growth and professional development.
- Flexible working hours and full remote work opportunity.
- Work in a collaborative, innovative, and inclusive environment.
- Be a part of a data-driven culture that is at the forefront of innovation.