We are seeking a skilled Data Engineer to design, build, and maintain scalable data pipelines and enable data-driven decision-making.
Requirements
- 3–5 years of hands-on experience in data engineering or related roles.
- Strong proficiency in programming languages such as Python, Scala, or Java.
- Experience with SQL and NoSQL databases, and strong understanding of data modeling concepts.
- Practical experience with big data technologies like Apache Spark, Hadoop, or similar frameworks.
- Familiarity with cloud platforms such as AWS, Google Cloud, or Azure, including data services (e.g., S3, BigQuery, Redshift, or Azure Data Lake).
- Experience building and maintaining ETL/ELT pipelines using tools like Airflow, dbt, or similar orchestration frameworks.
- Solid understanding of machine learning workflows, including data preprocessing, feature engineering, and model lifecycle support.
- Knowledge of streaming technologies like Kafka or Kinesis is desirable.
- Strong problem-solving skills and the ability to work with large, complex datasets.
- Experience working in cross-functional teams involving data science and product engineering.
- Understanding of MLOps principles and tools for model deployment and monitoring.
- Familiarity with containerization technologies such as Docker and orchestration tools like Kubernetes.
- Experience with version control systems like Git and CI/CD pipelines.
To apply for this job, please visit jobs.workable.com.