Join a team passionate about innovation for customers and partners, using expertise to transform how the world uses information. Responsibilities include building and maintaining data pipelines, developing and optimizing data structures, and creating custom software components.
Requirements
- Experience developing, delivering, and/or supporting data engineering, advanced analytics, or business intelligence solutions.
- Ability to work with multiple operating systems (e.g., Windows, Unix, Linux).
- Experience developing ETL/ELT processes using Apache NiFi and Snowflake, GCP BigQuery, or an equivalent data warehouse.
- Significant experience with big data processing and/or developing applications and data sources using Hadoop, YARN, Hive, Pig, Sqoop, MapReduce, HBase, Flume, etc.
- Understanding of how distributed systems work.
- Familiarity with software architecture (data structures, data schemas, etc.).
- Strong working knowledge of databases (e.g., Oracle, MS SQL Server), including both SQL and NoSQL.
- Strong mathematics background and strong analytical, problem-solving, and organizational skills.
- Knowledge of building APIs for application integration.
- Experience with continuous integration/continuous delivery (CI/CD) tools (e.g., Jenkins, Git, Docker, Kubernetes).
- Outstanding analytical thinking and strong interpersonal, oral, and written communication skills.
- Ability to prioritize and meet critical project timelines in a fast-paced environment.
- Self-motivated and team-oriented.