Data Engineer - Content Agency is looking for data-savvy professionals to join our team. As a Data Engineer, you will work with stakeholders throughout the company to ensure we have high-quality data to power our business in every department. Decisions for every part of our business -- from the front-end experience to performance marketing to partner products -- are driven by petabytes of data stored in MySQL, Hadoop, Cassandra, and other systems. Your challenge will be to make sure we can use our petabyte-scale data even more effectively to support business decisions and improve our products. Joining a cross-functional team of developers, designers, data scientists, and product owners, you will help us crunch the data to secure our place as the planet's #1 accommodation site.


As a Data Engineer, you are responsible for the development, performance, quality, and scaling of our data pipelines, with a special focus on data quality. You will work independently and will also be responsible for making technical decisions within a team.

Important aspects of the job include:

  • Rapidly developing next-generation scalable, flexible, and high-performance data pipelines.
  • Solving issues with data and data pipelines, prioritizing based on customer impact.
  • End-to-end ownership of data quality in our core datasets and data pipelines.
  • Experimenting with new tools and technologies to meet business requirements regarding performance, scaling, and data quality.
  • Providing tools that enhance data quality company-wide.
  • Providing self-organizing tools that help the analytics community discover data, assess quality, explore usage, and find peers with relevant expertise.
  • Acting as an intermediary for problems, communicating with both technical and non-technical audiences.
  • Contributing to the team's growth through interviewing, onboarding, and other recruitment efforts.


We are looking for driven Data Engineers who enjoy solving problems, who initiate solutions and discussions, and who believe that any challenge can be scaled with the right mindset and tools.

The candidates who fit us best match the following requirements:
  • A minimum of 3 years of experience in the field, using two or more server-side programming languages -- preferably Scala, Java, Python, or Perl.
  • Experience building data pipelines in distributed environments with technologies such as Hadoop, Cassandra, Kafka, Spark, HBase, and MySQL.
  • Demonstrable experience with query languages such as SQL, HQL, and CQL.
  • Hands-on experience developing with and contributing to open-source data technologies such as Hadoop.
  • Experience working on large-scale systems.
  • A good understanding of basic analytics and machine-learning concepts.
  • Preferably a university degree in Mathematics or Software Engineering.
  • Excellent written and spoken communication skills.

  • This position is open to candidates worldwide and, in the case of relocation, we will assist you with a generous relocation package, ensuring a smooth transition to working and living in Amsterdam.
