Principal Data Engineer - QuantumBlack

Qualifications

  • Commercial experience leading on client-facing projects, including working in close-knit teams
  • A degree in Computer Science or another relevant field
  • Experience and interest in Big Data technologies (e.g., Hadoop / Spark / NoSQL DBs)
  • Experience working on cloud-based projects, ideally on AWS or Azure
  • Proven ability to clearly communicate complex solutions
  • Experience working on fast-paced projects in a consulting setting, often on multiple projects at the same time
  • Strong development background with experience in at least two scripting, object-oriented, or functional programming languages (e.g., SQL, Python, Java, Scala, C#, R)
  • Data Warehousing experience, building operational ETL data pipelines across a number of sources, and constructing relational and dimensional data models
  • Experience with at least one ETL tool (e.g., Informatica, Talend, Pentaho, DataStage)
  • Ability to work across structured, semi-structured, and unstructured data, extracting information and identifying linkages across disparate data sets
  • Excellent interpersonal skills, interacting with clients in a clear, timely, and professional manner
  • Deep personal motivation to always produce outstanding work for your clients and colleagues
  • Excel in team collaboration and in working with others from diverse skill sets and backgrounds

Who You'll Work With

You will join QuantumBlack in London as part of the Global Data Engineering team and the wider analytics community.

At QuantumBlack, we help companies use data to drive decisions. We combine business experience, expertise in large-scale data analysis and visualisation, and advanced software engineering know-how to deliver results. From aerospace to finance to Formula One, we help companies prototype, develop, and deploy bespoke data science and data visualisation solutions to make better decisions.

What You'll Do

You will lead on our client projects and work closely with our Data Scientists to curate, transform, and construct the features that feed directly into our modelling approach.

Our pipelines are constructed with Spark, Python, Scala, or traditional SQL, and they often leverage cloud infrastructure such as AWS and Databricks. In some cases, however, we have to be flexible and use whatever environment is provided to us. With this in mind, you are expected to be nimble and creative when it comes to solving the problem at hand. Some puzzles require a novel approach; if you see one, make a case for it.
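
By way of illustration, the sketch below shows the general shape of such a pipeline in PySpark. It is only a sketch under assumed inputs: the paths, column names, and aggregations are hypothetical placeholders, not taken from any real project.

    # Minimal PySpark sketch of a feature pipeline: curate raw records,
    # construct per-customer aggregate features, and write them out for
    # the modelling stage. All paths and columns are hypothetical.
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("feature-pipeline-sketch").getOrCreate()

    # Curate: read raw transactions and drop malformed rows
    transactions = (
        spark.read.parquet("s3://example-bucket/raw/transactions/")
        .filter(F.col("amount").isNotNull())
    )

    # Transform and construct features that feed the modelling approach
    features = transactions.groupBy("customer_id").agg(
        F.count("*").alias("txn_count"),
        F.sum("amount").alias("total_spend"),
        F.avg("amount").alias("avg_spend"),
        F.max("ts").alias("last_seen"),
    )

    # Hand off to the modelling stage as a columnar dataset
    features.write.mode("overwrite").parquet("s3://example-bucket/features/customers/")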

This is a hybrid client-facing and technical role: you will use cutting-edge technologies whilst also communicating complex ideas to non-technical audiences. Gathering clear requirements is a key part of this role and will define the technical strategy the team employs on the study.

Our projects cover a wide range of industries and may expose you to problem areas such as disease epidemiology, athlete injury prediction, or sales-force effectiveness optimisation. To gain insight from previously ignored and unconnected data, you will need to extract information from a vast array of data sources: data warehouses, SQL databases, legacy applications, unstructured data, documents, emails, APIs, Kafka endpoints, and NoSQL databases.
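
As a rough illustration of that breadth, the sketch below pulls three very different source types into one Spark session. The connection details, paths, topic, and join key are invented placeholders, and the Kafka read assumes the spark-sql-kafka connector is available.

    # Hypothetical sketch: one Spark session reading from a relational
    # warehouse, semi-structured documents, and a Kafka endpoint.
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("multi-source-sketch").getOrCreate()

    # Relational warehouse table over JDBC (connection details are placeholders)
    orders = (
        spark.read.format("jdbc")
        .option("url", "jdbc:postgresql://warehouse.example.com:5432/sales")
        .option("dbtable", "public.orders")
        .option("user", "reader")
        .option("password", "example")
        .load()
    )

    # Semi-structured JSON documents from object storage
    documents = spark.read.json("s3://example-bucket/raw/documents/")

    # Streaming Kafka endpoint (requires the spark-sql-kafka connector)
    events = (
        spark.readStream.format("kafka")
        .option("kafka.bootstrap.servers", "broker.example.com:9092")
        .option("subscribe", "customer-events")
        .load()
    )

    # Identify linkages across disparate sets, e.g. join on a shared key
    linked = orders.join(documents, on="customer_id", how="left")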

