DataOps Engineer

Full Time · Tel-Aviv District, Israel (Remote) · Placer.ai

We are looking for a DataOps Engineer to own the infrastructure that powers Placer’s large-scale data processing platform. This is a platform-facing role sitting at the intersection of data engineering and infrastructure — you’ll be the person who makes Spark run reliably and efficiently on Kubernetes, so that data engineers can build with confidence.

Responsibilities

  • Design, deploy, and operate the Kubernetes-based infrastructure that runs Apache Spark and large-scale data processing workloads
  • Own the reliability, performance, and cost-efficiency of the data platform — including SLAs, autoscaling, resource quotas, and workload isolation
  • Manage Spark-on-K8s configurations, Airflow infrastructure, and Databricks integration; tune for throughput, latency, and cost
  • Build and maintain CI/CD pipelines and infrastructure-as-code for data platform components
  • Develop observability tooling — metrics, logging, alerting, and data quality dashboards — to proactively surface issues across the pipeline stack
  • Collaborate closely with Data Engineers to understand workload patterns and translate them into infrastructure decisions
  • Manage cloud storage (GCS/S3), Delta Lake, and Unity Catalog infrastructure
  • Drive platform improvements end-to-end: from design through deployment and ongoing ownership
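To give a flavor of the Spark-on-K8s configuration work described above, here is a minimal sketch of a SparkApplication manifest for the Kubernetes Operator for Apache Spark. All names, images, and paths below are hypothetical placeholders, and real platform manifests would layer on the quotas, isolation, and autoscaling policies this role owns:

```yaml
# Minimal SparkApplication sketch (spark-on-k8s-operator).
# Every name, namespace, image, and path here is illustrative.
apiVersion: sparkoperator.k8s.io/v1beta2
kind: SparkApplication
metadata:
  name: example-etl              # hypothetical job name
  namespace: data-platform       # hypothetical namespace
spec:
  type: Scala
  mode: cluster
  image: "example.registry/spark:3.5.0"     # hypothetical image
  mainClass: com.example.ETLJob             # hypothetical entry point
  mainApplicationFile: "gs://example-bucket/jobs/etl.jar"  # hypothetical artifact
  sparkConf:
    # Dynamic allocation is one common lever for cost/throughput tuning
    "spark.dynamicAllocation.enabled": "true"
    "spark.dynamicAllocation.shuffleTracking.enabled": "true"
  driver:
    cores: 1
    memory: "2g"
    serviceAccount: spark-driver            # hypothetical service account
  executor:
    instances: 4
    cores: 2
    memory: "4g"
```

In practice, tuning driver/executor sizing, dynamic allocation, and namespace-level resource quotas against real workload patterns is a core part of the day-to-day work.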

Benefits

  • Competitive salary
  • Excellent benefits

To apply for this job please visit placer.ai.
