The Sr. Data Engineer is a pivotal role within the Finance Data Hub and the Enterprise Data platform, responsible for establishing standards, building frameworks, and elevating engineering capabilities across the data organization.
Requirements
- B.E./M.Tech in Electrical, Electronics, or Computer Science
- 10+ years of experience
- End-to-end delivery of production data pipelines at enterprise scale: ingestion, transformation, orchestration, and serving layers. Strong SQL and Python proficiency
- Experience with both batch and streaming paradigms
- Technical leadership in a cross-functional environment — setting standards, mentoring engineers, conducting design reviews, and influencing engineering direction without necessarily holding a direct management title
- Deep hands-on Snowflake expertise: data sharing, zero-copy cloning, dynamic tables, streams and tasks, RBAC design, row access policies, dynamic data masking, warehouse sizing, and query optimization. Snowflake certification is a strong plus
- Proficient with GitHub for version control, pull request workflows, and GitHub Actions for CI/CD automation. Experience designing branching strategies and automated test/deploy pipelines for data workloads
- Hands-on experience with transformation tooling, building models, tests, macros, packages, sources, and exposures. Coalesce experience or familiarity is an advantage. Understanding of DAG-based transformation orchestration
- Has built or adopted reusable automated unit testing frameworks for data pipelines or transformation models. Understands test pyramid concepts in a data context: unit, integration, and contract tests
- Has designed and implemented RLS frameworks at the platform layer (e.g., Snowflake row access policies). Understands the intersection of data governance policy and platform enforcement
- Has implemented data quality monitoring frameworks and observability instrumentation in production environments
- Strong grasp of medallion architecture (Bronze/Silver/Gold), dimensional modeling (star schema, SCD types), and modern lakehouse/warehouse modeling patterns. Has published or enforced modeling standards
- Has led or meaningfully contributed to a data engineering modernization initiative — re-platforming, cycle time reduction, or adoption of modern tooling. Can articulate before/after outcomes with metrics
- Has experimented with or productionised GenAI tools to enhance data engineering workflows — AI code assistants, LLM-powered documentation, natural language querying, or AI-driven anomaly analysis.
To apply for this job, please visit eaton.eightfold.ai.