We are looking for a DataOps Engineer to own the infrastructure that powers Placer’s large-scale data processing platform. This is a platform-facing role sitting at the intersection of data engineering and infrastructure — you’ll be the person who makes Spark run reliably and efficiently on Kubernetes, so that data engineers can build with confidence.
Responsibilities
- Design, deploy, and operate the Kubernetes-based infrastructure that runs Apache Spark and large-scale data processing workloads
- Own the reliability, performance, and cost-efficiency of the data platform — including SLAs, autoscaling, resource quotas, and workload isolation
- Manage Spark-on-K8s configurations, Airflow infrastructure, and Databricks integration; tune for throughput, latency, and cost
- Build and maintain CI/CD pipelines and infrastructure-as-code for data platform components
- Develop observability tooling — metrics, logging, alerting, and data quality dashboards — to proactively surface issues across the pipeline stack
- Collaborate closely with Data Engineers to understand workload patterns and translate them into infrastructure decisions
- Manage cloud storage (GCS/S3), Delta Lake, and Unity Catalog infrastructure
- Drive platform improvements end-to-end: from design through deployment and ongoing ownership
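To give a flavor of the Spark-on-K8s tuning work described above, the sketch below assembles a baseline set of `spark.*` properties for a Kubernetes-backed batch job. The configuration keys are standard Apache Spark properties, but the image name, namespace, and all sizing values are purely illustrative assumptions, not Placer's actual settings.

```python
# Illustrative Spark-on-Kubernetes baseline. The keys are real Spark
# configuration properties; every value is a hypothetical starting point,
# not a recommendation for any specific workload.
def spark_on_k8s_conf(app_name: str, namespace: str, executors: int) -> dict:
    """Assemble spark.* properties for a Kubernetes-backed batch job."""
    return {
        "spark.app.name": app_name,
        # In-cluster Kubernetes API server endpoint:
        "spark.master": "k8s://https://kubernetes.default.svc",
        "spark.kubernetes.namespace": namespace,
        # Hypothetical container image:
        "spark.kubernetes.container.image": "registry.example.io/spark:3.5.1",
        # Right-sizing executors is the core throughput/cost lever:
        "spark.executor.instances": str(executors),
        "spark.executor.cores": "4",
        "spark.executor.memory": "8g",
        # Off-heap headroom for shuffle and network buffers:
        "spark.executor.memoryOverhead": "2g",
        # Dynamic allocation lets idle executors be reclaimed by the cluster:
        "spark.dynamicAllocation.enabled": "true",
        "spark.dynamicAllocation.maxExecutors": str(executors * 2),
    }

conf = spark_on_k8s_conf("nightly-etl", "data-platform", executors=10)
print(conf["spark.executor.instances"])  # "10"
```

In practice a conf like this would be fed to `spark-submit --conf key=value` pairs or a SparkApplication manifest; workload isolation and quotas would then be layered on top via Kubernetes namespaces and ResourceQuota objects.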
Benefits
- Competitive salary
- Excellent benefits
To apply for this job, please visit placer.ai.