The Staff Data Engineer will be responsible for translating business problems into data-related solutions, identifying suitable data sources, building data infrastructure, and deploying pipelines using scheduling and orchestration frameworks. The role involves data transformation, integration, and modeling, as well as code development, testing, and data governance.
Requirements
- Bachelor’s degree or equivalent in Computer Science or a related field, with 4 years of experience in software engineering or a related role, OR
- Master’s degree or equivalent in Computer Science or a related field, with 2 years of experience
- Experience with designing and implementing big data pipelines and ETLs using Hadoop, Hive, Spark, and Databricks
- Developing ETL pipelines using Scala, Python, Spark, and SQL
- SQL and Spark SQL querying for data analysis and validation
- Spark shell scripting for automated data validation
- Documenting data pipelines and workflows
- Orchestrating and automating pipeline solutions using scheduling and orchestration frameworks
- Writing SQL queries to retrieve data from RDBMS and data warehousing frameworks
- Developing and maintaining modules in a cloud ecosystem
- Working with Parquet, Avro, CSV, ORC, and JSON file formats
- Working with different file-based storage systems
- Creating and automating dashboards using Power BI, Tableau, and Looker
- Leading large-scale projects and providing guidance to team members
- System architecture, from high-level design (HLD) to detailed design, and disaster recovery planning
- Resource modeling, optimization, and forecasting
- Applying incident management, change management, defect-reduction, and system-reliability processes
Benefits
- Paid Time Off
- 401(k) Matching
- Retirement Plan
- Visa Sponsorship