Join our innovative team as a Senior Developer and make a meaningful impact on global science and healthcare. In this role, you’ll design and develop sophisticated software solutions that contribute to improving health and environmental outcomes.
Requirements
- Proven experience (5+ years) in data engineering with a strong understanding of modern data architecture, pipelines, and operations.
- 5+ years of demonstrated expertise in Databricks, Apache Spark, Python, and SQL, including pipeline creation, automation, and monitoring
- 5+ years of hands-on experience building and administering relational database solutions, including Delta Lake, Oracle, and Amazon Redshift
- 5+ years of experience managing AWS services, including Spark, Glue, Kafka, Elasticsearch, Lambda, S3, Redshift, and others
- Strong problem-solving skills with debugging, performance tuning, and AI/ML model deployment in cloud environments
- Prior experience with tools like Power BI, Cognos, SQL Server, and Oracle is a strong plus
- Deep understanding of data modeling, metadata management, and data governance frameworks
- Experience building, scheduling, and monitoring data workflows using Apache Airflow
- 5+ years of solid experience in DevOps/DataOps or equivalent roles, including version control platforms (such as GitHub) and continuous integration and delivery pipelines
- Demonstrated experience in leading engineering projects and managing project lifecycles
- A self-starter with the drive and ability to deliver complex solutions rapidly
- Ability to communicate effectively with technical and non-technical personnel, both orally and in writing
- Ensure stable and timely data pipelines that support period-end and quarter-end financial close processes, enabling accurate reporting and reconciliations
- Monitor, troubleshoot, and optimize finance workflows during close cycles to ensure data completeness, performance, and SLA compliance
To apply for this job, please visit www.careers-page.com.