We are seeking a highly skilled and experienced Lead Hadoop Data Engineer to join our growing data engineering team. The ideal candidate will have strong expertise in Big Data technologies and Hadoop ecosystem components, proficiency in Python programming and SQL development, and hands-on experience working in on-premises data environments.
Requirements
- 6 to 13 years of experience in Data Engineering or Big Data development
- Strong hands-on expertise in Hadoop and Big Data ecosystem technologies
- Proficiency in Python scripting and advanced SQL programming
- Experience working with Hadoop ecosystem components such as HDFS, Hive, Spark, Sqoop, Kafka, MapReduce, Pig, or HBase
- Solid understanding of distributed computing and large-scale data processing concepts
- Strong experience in building ETL/data pipeline solutions in enterprise environments
- Hands-on experience managing or working within on-premises infrastructure environments
- Good understanding of data warehousing concepts and relational databases
- Experience with Linux/Unix environments and shell scripting
- Strong analytical, troubleshooting, and performance optimization skills
- Familiarity with workflow orchestration tools such as Airflow or Oozie is preferred
- Excellent communication and stakeholder management abilities
Benefits
- Competitive salary
- Benefits package
- Opportunities for career growth
To apply for this job, please visit jobs.workable.com.