Build scalable data pipelines and architectures as a Data Engineer at PALO IT, a global technology consultancy. You will work with cross-functional teams to transform raw data into actionable insights, partnering with data scientists, analysts, and business stakeholders to keep data accurate, governed, secure, and available across systems.
Responsibilities
- Design, develop, and maintain scalable ETL/ELT pipelines
- Build and optimize data lakes and data warehouse solutions
- Develop robust data ingestion and transformation processes using Python, SQL, and Scala
- Work with distributed data processing frameworks such as Apache Spark and Hadoop
- Orchestrate and monitor workflows using Apache Airflow
- Ensure data quality, governance, security, and availability across multiple systems
- Collaborate with Data Scientists, Analysts, and Business teams to understand data requirements and deliver high-quality datasets
- Optimize database performance and manage large-scale datasets across SQL and NoSQL technologies
- Implement best practices for version control, CI/CD, testing, and code quality using Git
- Participate in architecture discussions and contribute to technical decision-making
- Support cloud-based data solutions leveraging AWS, GCP, or Azure ecosystems
- Continuously improve platform scalability, performance, and operational excellence through automation and AI-assisted engineering practices
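To give candidates a feel for the day-to-day work described above, here is a minimal, self-contained sketch of an extract-transform-load step with a data-quality gate. It is purely illustrative (the `payments` table, field names, and sample rows are hypothetical, not PALO IT's actual stack), and uses SQLite in place of a real warehouse:

```python
# Illustrative ETL sketch: extract raw records, validate/normalize them,
# and load the clean rows into a database. All names and data are hypothetical.
import sqlite3

def extract():
    # In a real pipeline this would read from an API, file drop, or source DB.
    return [
        {"id": "1", "amount": "19.99", "currency": "usd"},
        {"id": "2", "amount": "5.00", "currency": "EUR"},
        {"id": "3", "amount": "bad", "currency": "USD"},  # dirty row
    ]

def transform(rows):
    # Cast types and normalize values; drop rows that fail validation --
    # the "data quality" responsibility called out above.
    clean = []
    for row in rows:
        try:
            clean.append((int(row["id"]), float(row["amount"]),
                          row["currency"].upper()))
        except ValueError:
            continue  # in production, route to a dead-letter table instead
    return clean

def load(rows, conn):
    conn.execute(
        "CREATE TABLE IF NOT EXISTS payments (id INTEGER, amount REAL, currency TEXT)"
    )
    conn.executemany("INSERT INTO payments VALUES (?, ?, ?)", rows)
    conn.commit()

conn = sqlite3.connect(":memory:")
load(transform(extract()), conn)
count = conn.execute("SELECT COUNT(*) FROM payments").fetchone()[0]
print(count)  # the dirty row is rejected, so 2 rows are loaded
```

In production, each of these functions would typically become a task in an orchestrator such as Apache Airflow, with the transform step emitting rejected rows to a quarantine table rather than silently dropping them.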
Benefits
- A stimulating working environment
- Unique career path
- International mobility
- Internal R&D projects
- Knowledge sharing
- Personalized training
- Entrepreneurship & intrapreneurship
To apply for this job, please visit job-boards.greenhouse.io.
