Senior Data Engineer - Data Modeling/ETL
Slack is looking for expert data engineers to join our Data Engineering team. In this role, you will work cross-functionally with business domain experts, analytics, and engineering teams to design and implement our Data Warehouse model. You will design, implement, and scale data pipelines that transform billions of records into actionable data models that enable data insights.
You will lead initiatives to formalize data governance and management practices and to rationalize our information lifecycle and key company metrics. You will provide mentorship and hands-on technical support to build trusted, reliable domain-specific datasets and metrics.
The ideal candidate will have deep technical skills and be comfortable contributing to a nascent data ecosystem and building a strong data foundation for the company. They will be a self-starter, detail and quality oriented, and passionate about having a huge impact at Slack.
- Translate business requirements into data models that are easy to understand and used by different disciplines across the company. Design, implement, and build pipelines that deliver data of measurable quality within agreed SLAs
- Partner with business domain experts, data analysts, and engineering teams to build foundational datasets that are trusted, well understood, aligned with business strategy, and enable self-service
- Be a champion of the overall strategy for data governance, security, privacy, quality and retention that will satisfy business policies and requirements
- Own and document foundational company metrics with a clear definition and data lineage
- Identify, document and promote best practices
- Bachelor's degree in Computer Science, Engineering or related field, or equivalent training, fellowship, or work experience
- 5+ years of experience in data architecture, data modeling, master data management, and metadata management
- A consistent track record of close collaboration with business partners and crafting data solutions to meet their needs
- Strong experience scaling and optimizing schemas, and performance tuning SQL and ETL pipelines in OLTP, OLAP, and data warehouse environments
- Deep understanding of relational and NoSQL data stores, methods, and approaches (logging, columnar, star and snowflake schemas, dimensional modeling)
- Proficiency with object-oriented and/or functional programming languages is a big plus (e.g. Java, Scala, Python, Go)
- Hands-on experience with Big Data technologies (e.g., Hadoop, Hive, Spark)
- Excellent written and verbal communication and interpersonal skills; able to collaborate effectively with technical and business partners
- Excellent understanding of engineering trade-offs (e.g., consistency vs. availability, normalization vs. query performance)
- Demonstrated ability to navigate between big-picture and implementation details