Senior Data Engineer

OVERVIEW

We’re hiring a Senior Data Engineer to join our growing Data Engineering Team to elevate Glossier as an inviting, fun way to buy beauty products. You’ll join a team focused on building a delightful e-commerce and in-store experience, and fostering the most welcoming community to discuss skin care and makeup.

As a Senior Data Engineer at Glossier you are responsible for (i) delivering and maintaining highly available computing platforms, (ii) creating data integration services, and (iii) building data products. Points (i) and (ii) cover systems that collect and transform data from a variety of sources and store it in highly optimized database systems. The services will largely be written in Python, and data will be stored in a combination of Snowflake, Redshift, and Amazon S3. You will also maintain the ongoing reliability, performance, and support of the infrastructure. This includes monitoring the computing environments and providing solutions based on application needs and anticipated growth.

Point (iii) covers building custom services and internal web applications that use computed data to support our business. These include search engines and recommendation systems, among other things.

As a Senior Data Engineer, you will also take an active role in designing the future of the data engineering practice at Glossier. A successful candidate will combine strong technical skills, a passion for creative problem solving, and an intense curiosity.


At Glossier we are also mindful of building an inclusive culture, where decisions are made transparently and we support each other's learning and growth. Data is a critical component at Glossier and ensuring consistent, reliable access to our data is a significant strategic priority.

We expect you to have good communication skills and to collaborate across functions to deliver robust solutions that meet the business needs of the company.


OUR DATA STACK

  • Data Warehousing with Snowflake, Amazon Aurora, and Redshift
  • AWS for serving infrastructure
  • Python, JavaScript, and TypeScript
  • dbt and Luigi for ETL
  • Fivetran & Stitch
  • Segment
  • Looker
  • Docker


6 MONTH EXPECTATIONS

  • Have contributed to the data pipeline process, creating new custom integrations that bring data into our systems
  • Are comfortable working with our technology stack
  • Have taken an active role in designing the future of our data engineering system
  • Have collaborated with the team on best practices and overall business strategy


12 MONTH EXPECTATIONS

  • Deliver a data engineering system, including data pipeline and data warehousing components, that is simple, reliable, and performant
  • Have made numerous contributions to our data pipelining system and data monitoring
  • Have taken a lead role in managing the development and architecture design of our data infrastructure


SKILLS AND QUALIFICATIONS

  • 3–5 years of experience working in software development, data engineering, or related STEM fields
  • 3+ years of hands-on experience with relational databases and data warehousing
  • Proven track record of excellence in delivering production-grade software that scales to thousands of users
  • Strong programming skills in Python and SQL
  • Practical experience with best practices for building data pipelines
  • Experience in Linux is a plus
  • Ability to learn autonomously and quickly
  • Analytical, creative, and commercial mindset
  • Extremely organized and detail-oriented with effective multitasking and prioritization skills
  • Highly motivated and willing to take ownership of work, with a drive to solve problems and the ability to work effectively under pressure
  • Excellent written and verbal communication skills, with a willingness to proactively engage other team members and foster a strong, collaborative team environment
