Senior Data Engineer role at UPS, supervising and participating in data pipeline development, data integration, and data cleansing. Responsible for data engineering maintenance, support, and implementation of systems and applications software. Requires strong knowledge of data analytics processing frameworks, database systems, and cloud solutions.
Requirements
- Supervises and supports data engineering projects and builds solutions by leveraging a strong foundational knowledge in software/application development.
- Develops and delivers data engineering documentation.
- Gathers requirements, defines the scope, and performs the integration of data for data engineering projects.
- Recommends analytic reporting products/tools and supports the adoption of emerging technology.
- Performs data engineering maintenance and support.
- Defines the implementation strategy and executes backup, recovery, and technology solutions, including proof-of-concept (POC) analysis.
- Builds data APIs to enable data scientists and business intelligence analysts to query the data.
- Codes in programming languages used for statistical analysis and modeling, such as Python, Java, Scala, or C++.
- Strong understanding of database systems and data warehousing solutions.
- Strong understanding of the data interconnections between an organization's operational and business functions.
- Strong understanding of the data life cycle stages: data collection, transformation, analysis, secure storage, and accessibility.
- Strong understanding of the data environment and how to scale it for growing demands: increasing data pipeline throughput, analysis of large data volumes, real-time predictions, insights and customer feedback, data security, and regulatory compliance.
- Strong knowledge of algorithms and data structures, as well as data filtering and data optimization.
- Strong understanding of analytic reporting technologies and environments (e.g., Power BI).
- Strong understanding of a cloud services platform (e.g., GCP, Azure, or AWS) across all data life cycle stages.
- Proficiency in Databricks notebooks using Python, SQL, and PySpark for data transformation, modeling, and analysis.
- Strong communication and storytelling skills to present complex data concepts in clear, non-technical language.
- Experience working with cross-functional teams (product, operations, finance, marketing, compliance).
- Understanding of business KPIs, metrics, and domain-specific data models to ensure data solutions support decision-making.
- Requirements gathering and stakeholder management to ensure delivered pipelines meet user needs.
- Knowledge of Unity Catalog for centralized governance, lineage, permissions, and secure data sharing.
- Understanding of data versioning, ACID transactions, and time travel using Delta Lake.
- Understanding of machine learning algorithms that help data scientists make predictions from current and historical data.
- Understanding of distributed systems and the underlying business problem being addressed; guides team members on how their work contributes by performing data analysis and presenting findings to stakeholders.
- Knowledge of algorithms and data structures, with the ability to organize data for reporting, analytics, and data mining, and to perform data filtering and optimization.
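To illustrate the data life cycle stages named above (collection, transformation/cleansing, and analysis for reporting), here is a minimal sketch in plain Python using SQLite in place of the Databricks/PySpark stack the role uses in practice. All table, column, and function names are hypothetical and for illustration only.

```python
import sqlite3

def run_mini_pipeline(raw_rows):
    """Ingest raw shipment records, cleanse them, and produce a reporting aggregate.

    Stages mirror the life cycle above: collection -> transformation -> analysis.
    """
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE shipments (id INTEGER, region TEXT, weight_kg REAL)")

    # Collection: ingest the raw records as-is.
    conn.executemany("INSERT INTO shipments VALUES (?, ?, ?)", raw_rows)

    # Transformation/cleansing: drop rows with missing or invalid weights.
    conn.execute("DELETE FROM shipments WHERE weight_kg IS NULL OR weight_kg <= 0")

    # Analysis: aggregate per region for downstream reporting.
    cur = conn.execute(
        "SELECT region, ROUND(AVG(weight_kg), 2) FROM shipments "
        "GROUP BY region ORDER BY region"
    )
    result = cur.fetchall()
    conn.close()
    return result

rows = [(1, "EAST", 12.5), (2, "EAST", 7.5), (3, "WEST", None), (4, "WEST", 9.0)]
print(run_mini_pipeline(rows))  # [('EAST', 10.0), ('WEST', 9.0)]
```

In a Databricks notebook the same collect/cleanse/aggregate pattern would typically be expressed with PySpark DataFrame operations over Delta tables rather than an in-memory SQLite database.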
Benefits
- 401k Matching
- Retirement Plan
- Paid Time Off
To apply for this job please visit hcmportal.wd5.myworkdayjobs.com.