We are seeking a DataOps Engineer with hands-on data engineering experience to help design, build, and operate reliable, scalable data systems, protecting data from the inside out.
Responsibilities
- Design, develop, and maintain batch and streaming data pipelines from multiple source systems
- Implement ETL/ELT processes to ingest, transform, and model data for analytics and downstream consumers
- Build and optimize data models, tables, and views in cloud data warehouses or lakehouses
- Enforce data quality, validation, and schema management across pipelines
- Optimize pipeline performance, scalability, and cost efficiency
- Collaborate with analytics and data science teams to support reporting, dashboards, and ML workloads
- Apply DataOps best practices including CI/CD for data pipelines, automated testing, and version control
- Monitor pipeline health, data freshness, and SLAs using observability and alerting tools
- Automate operational tasks such as deployments, backfills, schema evolution, and rollbacks
- Manage and improve production reliability of data systems, including on-call support and incident response
- Implement and maintain infrastructure and orchestration for data workflows
- Improve transparency and trust in data through metadata, lineage, and documentation
- Partner with platform teams on infrastructure as code, security, and access management
Benefits
- Flexible, hybrid work model
- Opportunities for professional growth and development
- Collaborative and supportive work environment
To apply for this job, please visit jobs.jobvite.com.