Requirements
- 3+ years of hands-on experience in data engineering with a focus on ETL workflows, data pipelines, and cloud computing.
- Strong experience with AWS services for data processing and storage (e.g., S3, Glue, Athena, Lambda, Redshift).
- Proficiency in programming languages such as Python, PySpark, and TypeScript/JavaScript.
- Deep understanding of microservices architecture and distributed systems.
- Familiarity with AI/ML tools and frameworks (e.g., TensorFlow, PyTorch) and their integration into data pipelines.
- Experience with cloud data platforms such as Snowflake.
- Strong problem-solving and performance optimization skills.
- Exposure to modern DevOps practices, including CI/CD pipelines and container orchestration tools like Docker and Kubernetes.
- Experience working in agile environments delivering complex data engineering solutions.
- Proven expertise or certification in Palantir Foundry is highly preferred.
- Prior experience in the insurance domain is highly desirable.
Benefits
- Paid time off
- 401k matching
- Retirement plan
- Health insurance
- Dental insurance
- Vision insurance
To apply for this job please visit fa-ewjt-saasfaprod1.fa.ocs.oraclecloud.com.