About the job
Our vision is to make a healthy and sustainable lifestyle the most attractive choice. So we defy the status quo, misleading products, and nonsense rules! We truly believe in the power of ideas and are thirsty to realize the ones that could have impact. We're an eclectic bunch of creative minds, experts, builders, and improvers. Our perspectives, cultures, backgrounds, dreams, skills, and life paths may have nothing in common, but we gather behind a vision and sail together towards our ambitious goals.
The Data Engineering team's mission is to empower our users with trusted data that is easy to access, understand, and use to support evidence-based business decisions. Together with our data analysts, we are the main knowledge and insights powerhouse, supporting the business with data and insights that shape data-driven decision-making. Our data infrastructure is based on a modern tech stack, leveraging powerful tools and cloud services.
As Senior Data Engineer, you are responsible for managing our tech stack, architecture, and data ingestion layer. You will work in close collaboration with our data analysts as well as with delivery teams within the tech division to provide and maintain a self-service data platform that delivers value for the business.
This could mean doing a number of things, such as rolling up your sleeves to help with changes to our web tracking infrastructure, coaching delivery team members to build a data product, or participating in design/architecture sessions. You’ll be part of the engineering team, shaping the overall vision and direction of our data architecture and creating much-needed impact in line with company objectives.
Your responsibilities
- Collaboration: You will work with data analysts, software development engineers, and external partners to create data products for both internal and external use.
- Software Engineering: You will build and operate data products and pipelines. Trunk-based development, test-driven development (TDD), a monorepo, and continuous integration/continuous delivery (CI/CD) are practices we use to build and deploy software. We often work in pairs (pair programming) to share knowledge and ensure high quality.
- Architecture: You will use modern data engineering tools and services, while making sure our data architecture works hand in hand with surrounding tools and services.
- Operation/Monitoring: You will leverage our monitoring tools (e.g. Metaplane, Datadog, PagerDuty) to proactively detect data quality problems in our data pipelines.
Your profile
- Solid experience as a software and/or data engineer (7+ years). Deep software engineering know-how, passion, and willingness to learn are more important than actual data engineering experience.
- Ideally some experience with complex data modelling, ELT design, and data lakes, as well as an understanding of and experience with Big Data technologies like Apache Spark (PySpark) and the corresponding AWS services.
- Experience with languages like Python and SQL.
- Ideally knowledge of data lakehouse architectures (Apache Iceberg, AWS Glue, S3, Redshift) and data management tools (Airflow, Fivetran, DuckDB). Experience with infrastructure provisioning tools (like Terraform) is considered a plus.
- Ideally some experience with data mesh and domain-driven modelling.
- Excellent verbal and written communication in English. Great interpersonal skills and ability to collaborate with a multicultural team.