Data Engineer

    • New York, NY

Peloton is looking for a Data Engineer to build our Data Warehouse and Data Pipelines. You will work with multiple teams of passionate and skilled data engineers, architects, and analysts responsible for building batch and streaming data pipelines that process terabytes of data daily. These pipelines support all of the analytics, business intelligence, data science, and reporting needs across the organization.

Peloton is a cloud-first engineering organization with all of our data infrastructure in AWS, leveraging EMR, AWS Glue, Redshift, S3, and Spark. You will interact with many business teams, including marketing, sales, supply chain, logistics, and finance, and partner with them to scale Peloton’s data infrastructure for future strategic needs.

Responsibilities

  • Understand the data needs of different stakeholders across multiple business verticals, including Finance, Marketing, Logistics, and Product.
  • Develop the vision and map strategy to provide proactive solutions and enable stakeholders to extract insights and value from data.
  • Understand end-to-end data interactions and dependencies across complex data pipelines and data transformations, and how they impact business decisions.
  • Define best practices for big data processing, data modeling, and warehouse development throughout the company.

Requirements

  • Familiar with at least one of the following programming languages: Python, Java.
  • Comfortable with the Linux operating system and command-line tools such as Bash.
  • Familiar with REST for accessing cloud-based services.
  • Excellent knowledge of databases such as PostgreSQL and Redshift.
  • Experience with Git, GitHub, Jira, and Scrum.
  • 2+ years building data warehouses and data pipelines, or 3+ years in data-intensive engineering roles.
  • Experience with big data architectures and data modeling to efficiently process large volumes of data.
  • Background in ETL and data processing, with the ability to transform data to meet business goals.
  • Experience developing large data processing pipelines on Apache Spark.
  • Experience with Python or Java programming languages.
  • Strong understanding of SQL and working knowledge of SQL (preferably PostgreSQL and Redshift) for various reporting and transformation needs.
  • Excellent communication, adaptability and collaboration skills.
  • Experience with Agile methodology and applying Agile practices to data engineering.
  • Experience with Java, JDBC, and the AWS SDK.

Nice to have

  • Familiar with the AWS ecosystem, including RDS, Glue, Athena, etc.
  • Experience with Apache Hadoop, Hive, Spark, and PySpark.

ABOUT PELOTON

Peloton is the largest interactive fitness platform in the world with a loyal community of more than 2.6 million Members. The company pioneered connected, technology-enabled fitness, and the streaming of immersive, instructor-led boutique classes for its Members anytime, anywhere. Peloton makes fitness entertaining, approachable, effective, and convenient, while fostering social connections that encourage its Members to be the best versions of themselves. An innovator at the nexus of fitness, technology, and media, Peloton has reinvented the fitness industry by developing a first-of-its-kind subscription platform that seamlessly combines the best equipment, proprietary networked software, and world-class streaming digital fitness and wellness content, creating a product that its Members love. The brand's immersive content is accessible through the Peloton Bike, Peloton Tread, and Peloton App, which allows access to a full slate of fitness classes across disciplines, on any iOS or Android device, Fire TV, Roku, Chromecast and Android TV. Founded in 2012 and headquartered in New York City, Peloton has a growing number of retail showrooms across the US, UK, Canada and Germany. For more information, visit www.onepeloton.com.
