Senior Analytics Engineer

InPost Group

Company Description

InPost Group is an innovative European out-of-home delivery company, revolutionizing the way parcels are delivered to customers. With operations across several countries, our network of intelligent lockers (Paczkomat®) gives customers a fast, convenient, and secure delivery option. Our mission is to provide a best-in-class experience for merchants and consumers alike: “Simplify everything” – redefining e-commerce logistics. We innovate the market through constant technological research and meticulous attention to the customer.

The Data & AI department is seeking a Senior Analytics Engineer to join our Core Team. In this role, you’ll shape analytical standards and implement innovative solutions, impacting our operations across Poland and 7 international markets. Remote work is possible.

Day to day you’ll work with: Apache Spark on Databricks and its various platform features, Python/PySpark, SQL, Kafka, Power BI, GitLab, Google BigQuery, and an in-house data modeling tool.

Job Description

On a daily basis you will:

  • Drive innovation and improvements by evaluating new tools (e.g., Data Quality monitoring) and platform features (e.g., Genie Space on Databricks).
  • Monitor the effectiveness of solutions by tracking implemented actions (e.g., naming convention adherence, metadata completeness, MR quality).
  • Define workflows and coding standards for style, maintainability, and best practices on the analytical platform.
  • Evangelize best practices among platform users and encourage teams to continuously improve their working methods. Advocate for coding standards through workshops and guidelines.
  • Monitor the market for new tools and methodologies in the data product development space.
  • While the role involves conceptual work, you’ll also have opportunities for hands-on coding, such as analyzing AI readiness and implementing AI solutions to automate data development tasks.
  • Work with various Data & AI competencies (Data Consultants, Data Engineers, AI Engineers, Cloud Engineers, Data Architects).

Qualifications

Which skills should you bring to the pitch:

  • At least 5 years of experience in an analytical role working with large datasets
  • Experience in data modeling and implementing complex data-driven solutions is a strong plus
  • Excellent proficiency in Python/PySpark for data analysis, SQL for data processing, and Bash scripting for managing Git repositories
  • Comprehensive understanding of the technical aspects of data warehousing, including dimensional data modeling and ETL/ELT processes
  • Experience with real-time data processing and the ability to handle data from various backend/frontend systems.
  • Familiarity with cloud-based data platforms (GCP/Azure/AWS)
  • The ability to present technical concepts and solutions to diverse audiences
  • Self-motivated with the ability to work independently and manage multiple tasks
  • Excellent interpersonal skills with the ability to collaborate effectively with cross-functional teams
  • Fluent in English: verbal and written

Nice to have:

  • Experience in working with Apache Spark in Databricks
  • Familiarity with modern data-building tools such as Apache Airflow and dbt
  • Familiarity with data visualization tools such as Power BI/Tableau/Looker
  • Knowledge of data governance principles and practices
  • Ability to thrive in a highly agile, intensely iterative environment
  • Positive and solution-oriented mindset

Additional Information

The course of the recruitment process:

  1. HR Interview
  2. Devskiller test
  3. Technical Interview (60 min)
  4. Home task
  5. Home task presentation and discussion (60 min)

To apply for this job please visit remotive.com.
