Distributed Systems Engineer

We’re on a mission to bring sanity to cloud operations, and we need you to build the data pipelines that ingest, store, analyze, and query hundreds of billions of events a day.

Join us to build powerful and resilient data systems.

What you will do

  • Build distributed, high-throughput, real-time data pipelines
  • Do it in Go and Python, with bits of C or other languages
  • Use Kafka, Redis, Cassandra, Elasticsearch and other open-source components
  • Join a tightly knit team solving hard problems the right way
  • Own meaningful parts of our service, have an impact, grow with the company

Who you must be

  • You have a BS/MS/PhD in a scientific field
  • You have significant experience with Go or Python on the back-end
  • Before that, you've mastered a JVM language or C/C++
  • You can get down to the low level when needed
  • You tend to obsess over code simplicity and performance
  • You want to work in a fast-paced, high-growth startup environment

Bonus points

  • You've written your own data pipelines once or twice before (and know what you did wrong)
  • You have battle scars from Cassandra, Hadoop, Kafka, or NumPy
  • You are very curious about Apache Spark
  • You have a strong background in statistics

Is this you? Send us your resume and a link to your GitHub.


Meet Some of Datadog's Employees

Samantha D.

Software Engineer

Samantha strives to build a product that’s reliable and user-friendly by working on both the front-end and back-end code to improve Datadog’s functionality and platform capabilities.

Toni L.

Associate Manager, Event Marketing

Toni curates and organizes events that drive business for Datadog. She ensures every workshop, tradeshow, and internal corporate shindig drives engagement and builds lasting connections.

