We’re on a mission to bring sanity and clarity to Dev & Ops. We need you to build massively scalable, elegant systems that turn billions of data points per day into meaning for our customers. If you’re excited to work on a fast-moving team with the best open-source data tools at high scale, we want to meet you.
What You Will Do
- Build distributed, high-volume data pipelines to power new product features based on analytics and machine learning
- Do it with Hadoop, Spark, Luigi, Kafka, and other open-source technologies
- Work across the stack, moving fluidly between programming languages: Python, Java, Pig, Go, Scala, and more
- Join a tightly knit team solving hard problems the right way
- Own meaningful parts of our service, have an impact, grow with the company
Who You Must Be
- You have a BS/MS/PhD in a scientific field
- You have built and operated data pipelines for real customers in production systems
- You are fluent in several programming languages (JVM & otherwise)
- You enjoy wrangling huge amounts of data and exploring new data sets
- You tend to obsess over code simplicity and performance
- You want to work in a fast-paced, high-growth startup environment
- You are deeply familiar with Hadoop and/or Spark
- In addition to data pipelines, you're quite good with Chef or Puppet
- You’ve built applications that run on AWS
- You have opinions about Lambda and Kappa architectures
- You’ve built your own data pipelines from scratch, know what goes wrong, and have ideas for how to fix it
Is this you? Send us your resume and a link to your GitHub.