Data Engineer

About Intercom

The way businesses talk to people online today is broken. Intercom is fixing it.
Intercom is the first to bring sales, marketing & customer service products to one messaging platform, helping businesses avoid the stiff, spammy status quo and have real conversations with the people that matter. Designed to feel like the messaging apps you use every day, Intercom is the only business messenger to let you talk to consumers just about anywhere: inside your app, on your website, across social media and via email.
Intercom invented in-app messaging in 2011, and today more than 13,000 businesses use Intercom to reach a billion people around the world. Intercom has 300 employees split between its San Francisco headquarters and its Dublin R&D office, and has raised $116M in venture funding.

What's the opportunity?

The go-to-market analytics team at Intercom sits at the intersection of marketing, sales, and growth, giving us a high-level view of the entire business. As the company scales, we’re looking for a data engineer to help our business function teams and analysts make better decisions with data. The analytics team at Intercom is trying to understand how businesses attract, sell to, and retain their customers, starting with our own.

As our first data engineer in SF, you’ll work with data analysts, product managers, and the leaders of our business functions to understand their data needs. You’ll not only own and develop our data pipeline, but also design and build internal analytics products on top of our data platform, used by people across the entire company.

This is a unique opportunity to be a founding member of the analytics engineering team. You’ll have an immense impact, with direct input into the tools we use and how we build out our analytics stack. In addition, you’ll have the chance to define the engineering culture in SF and improve our engineering practices.

What will I be doing?

  • Help to increase the accessibility of data to the entire company by designing and implementing a custom ETL pipeline (currently built on MySQL, Redshift, Hadoop, and Spark)
  • Allow the analytics team to do more, faster, by building a logical and understandable data model to house data pulled in from third-party APIs
  • Build systems to track data quality and consistency (and alert us if something goes wrong), ensuring that our data is accurate and up to date
  • Speed up and simplify our sales commissions process by transforming our billing and sales data
  • Increase our marketing spend ROI by designing and deploying software to automate ad creation and optimize bidding

What skills do I need?

  • Ideally 2–4 years of experience in a data engineering or similar role
  • Experience with custom ETL design and implementation, data warehousing, schema design, and data modeling. We work in Python, Java, and Scala, but don’t hire based on specific language knowledge
  • Experience troubleshooting the full stack, from Hadoop down to operating systems and networking
  • Experience with AWS is a strong plus
  • Some experience with, or interest in, working across the stack to build web applications


We are a well-treated bunch, with awesome benefits! If there’s something important to you that’s not on this list, talk to us! :)

  • Competitive salary and meaningful equity
  • Relocation assistance available
  • Catered lunch and dinner served every weekday, plus a variety of breakfast foods and a fully stocked kitchen
  • Regular compensation reviews - great work is rewarded!
  • Fully funded comprehensive medical, dental, and vision coverage
  • Open vacation policy and 10 corporate holidays
  • Paid parental leave program
  • 401k plan
  • Commuter benefits
  • In-office bicycle storage
  • MacBooks are our standard, but we’re happy to get you whatever equipment helps you get your job done
  • Fun events for Intercomrades, friends, and family!
