Join a purpose-driven, winning team committed to results, in an inclusive and high-performing culture. We are looking for a hands-on Data Engineer with deep expertise in Apache Spark and strong programming skills in Python, Scala, and Java. This role is centered on building high-performance, scalable data pipelines and processing large datasets in a distributed environment.
Responsibilities
- Design, develop, and maintain large-scale Spark applications using Python, Scala, and/or Java
- Build and optimize batch and streaming data pipelines in distributed environments
- Write production-quality Spark code with strong focus on performance, scalability, and reliability
- Optimize Spark jobs (partitioning, caching, shuffles, memory tuning, execution plans)
- Develop reusable Spark frameworks, libraries, and utilities
- Work with structured and semi-structured data (Parquet, Delta, CSV, JSON)
- Collaborate with platform, analytics, and data science teams to support downstream use cases
- Debug and troubleshoot Spark job failures and performance issues in production
- Follow best practices for code quality, testing, logging, and documentation
Benefits
- Diversity, Equity, Inclusion & Allyship
- Accessibility and Workplace Accommodations
- Upskilling through online courses, cross-functional development opportunities, and tuition assistance
- Competitive rewards program including bonus, flexible vacation, personal and sick days, and benefits
To apply for this job, please visit jobs.scotiabank.com.