ShyftLabs is seeking a skilled Data Engineer to design, develop, and optimize big data solutions using the Databricks Unified Analytics Platform.
Responsibilities
- Design, implement, and optimize big data pipelines in Databricks.
- Develop scalable ETL workflows to process large datasets.
- Leverage Apache Spark for distributed data processing and real-time analytics.
- Implement data governance, security policies, and compliance standards.
- Optimize data lakehouse architectures for performance and cost-efficiency.
- Collaborate with data scientists, analysts, and engineers to enable advanced AI/ML workflows.
- Monitor and troubleshoot Databricks clusters, jobs, and performance bottlenecks.
- Automate workflows using CI/CD pipelines and infrastructure-as-code practices.
- Ensure data integrity, quality, and reliability in all pipelines.
Benefits
- Hybrid Flexibility
- Comprehensive Benefits
- Growth & Learning
- Inclusion at ShyftLabs
