Design and build the Apache Spark/PySpark ETL pipeline, implement Apache Iceberg table operations, and collaborate with stakeholders to clarify KPIs and query patterns. Develop business logic connectors and transformation helpers, and write comprehensive unit, integration, and end-to-end tests. Required skills: PySpark, SQL, cloud data warehousing, Apache Iceberg, dimensional modelling fundamentals, API design, error handling, and testing discipline.
Requirements
- Design and build the Apache Spark/PySpark ETL pipeline
- Implement Apache Iceberg table operations
- Collaborate with stakeholders to clarify KPIs and query patterns
- Develop business logic connectors and transformation helpers
- Write comprehensive unit, integration, and end-to-end tests
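To illustrate the kind of transformation helper this role involves, here is a minimal sketch of a function that builds a Spark SQL MERGE INTO statement for an Iceberg upsert. All table and column names are hypothetical, and the helper itself is an assumed example rather than part of any existing codebase; its pure-string logic is also the sort of unit-testable surface the testing requirement points at:

```python
def build_iceberg_merge_sql(target, source, key_cols, update_cols):
    """Build a Spark SQL MERGE INTO statement for an Iceberg upsert.

    target/source are fully qualified table names; key_cols join the
    staged batch to the target table; update_cols are overwritten
    when a matching row already exists.
    """
    on_clause = " AND ".join(f"t.{c} = s.{c}" for c in key_cols)
    set_clause = ", ".join(f"t.{c} = s.{c}" for c in update_cols)
    insert_cols = ", ".join(key_cols + update_cols)
    insert_vals = ", ".join(f"s.{c}" for c in key_cols + update_cols)
    return (
        f"MERGE INTO {target} t USING {source} s ON {on_clause} "
        f"WHEN MATCHED THEN UPDATE SET {set_clause} "
        f"WHEN NOT MATCHED THEN INSERT ({insert_cols}) "
        f"VALUES ({insert_vals})"
    )

# Hypothetical usage against a Spark session with an Iceberg catalog:
# spark.sql(build_iceberg_merge_sql(
#     "warehouse.dim_customer", "staging.customer_batch",
#     ["customer_id"], ["name", "email", "updated_at"]))
```

Because the helper only assembles SQL text, it can be covered by plain unit tests without a running Spark cluster, while the generated statement exercises Iceberg's row-level MERGE support in integration tests.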
Benefits
- Generous Paid Time Off
- 401k Matching
- Retirement Plan
- Relocation Assistance
To apply for this job, please visit fa-etvl-saasfaprod1.fa.ocs.oraclecloud.com.
