EIM Data Engineer

On Site · Full Time · Gurgaon, Haryana, India · Virtusa

Design and build automated ETL/ELT workflows, ingest data from various sources, transform and standardize data formats, and ensure HIPAA and GDPR compliance.

Requirements

  • Pipeline Development: Build and maintain robust data pipelines using Python, SQL, and Spark to process large-scale healthcare claims and salary survey data.
  • Data Normalization: Develop logic to clean and standardize diverse data formats.
  • Languages: Expert-level SQL and Python (specifically for data manipulation via Pandas/PySpark).
  • Big Data Tools: Hands-on experience with Databricks, Snowflake, or Hadoop ecosystems.
  • Orchestration: Experience with Airflow or Azure Data Factory for managing complex job dependencies.
  • Modeling: Understanding of Star/Snowflake schemas and Data Vault 2.0 for long-term analytical storage.
  • Cloud Platforms: Deploy and monitor data workloads on Azure (Data Factory/Databricks) or AWS (Glue/Redshift) to ensure high availability and scalability.
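As a flavor of the data-normalization work described above, here is a minimal, stdlib-only Python sketch. The field names (`service_date`, `billed_amount`) and input formats are hypothetical illustrations, not taken from the posting; a real claims or salary-survey pipeline would define its own schema and would likely implement this in Pandas or PySpark:

```python
from datetime import datetime

# Hypothetical set of date formats seen across source feeds.
DATE_FORMATS = ("%Y-%m-%d", "%m/%d/%Y", "%d-%b-%Y")

def normalize_date(raw: str) -> str:
    """Parse a date in any known source format and emit ISO 8601."""
    for fmt in DATE_FORMATS:
        try:
            return datetime.strptime(raw.strip(), fmt).date().isoformat()
        except ValueError:
            continue
    raise ValueError(f"unrecognized date format: {raw!r}")

def normalize_amount(raw: str) -> float:
    """Strip currency symbols and thousands separators, return a float."""
    return float(raw.replace("$", "").replace(",", "").strip())

def normalize_record(rec: dict) -> dict:
    """Standardize one raw record into the canonical shape."""
    return {
        "service_date": normalize_date(rec["service_date"]),
        "billed_amount": normalize_amount(rec["billed_amount"]),
    }
```

For example, `normalize_record({"service_date": "03/15/2024", "billed_amount": "$1,234.50"})` yields `{"service_date": "2024-03-15", "billed_amount": 1234.5}`. The same try-each-format pattern scales to a PySpark UDF or a Pandas `apply` when the data volume requires it.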


To apply for this job please visit virtusa.taleo.net.

