Software Engineer - Data Infrastructure
Coda is looking for a software engineer who will help build reliable, distributed data systems and pipelines that enable product analytics, experimentation, and business warehouses.
Your work will unlock insights enabling our team to execute with speed and confidence. Your data systems will allow us to tailor the product experience based on our customers' needs and behaviors, while simultaneously empowering our marketing and sales teams to reach and engage our users effectively.
As a member of Coda’s engineering team, you will operate as a software engineer and have the opportunity to work broadly across our product, from our mobile and browser-based clients to our servers and infrastructure. You’ll work closely with a stellar team of passionate, experienced engineers, designers, and product managers who've been instrumental in building some of the most widely used technology products in the world, including YouTube, Google Drive/Docs, Amazon AWS, Pinterest, and Microsoft Azure.
If you are data curious and excited about designing data systems and tools that have a tangible impact on our business and product, we'd love to hear from you! This is an incredible opportunity to have an outsized impact on the future direction of data within Coda.
Our current stack focuses on TypeScript, Python, and Node, with our server infrastructure running on Kubernetes in AWS, Snowflake for warehousing, and Apache Airflow for orchestration. We believe in using the best tool for the job at hand, and don't shy away from solving hard problems!
Key responsibilities include
- Building the system for collecting, processing, and storing events in real time from browsers, mobile apps, servers, and other third-party services.
- Developing and evangelizing event schemas and logging patterns across the entire engineering team to power product analytics, with an eye toward security and compliance (such as GDPR and CCPA).
- Building robust ETL pipelines with Apache Airflow from different sources (S3, relational databases) into Snowflake.
- Gathering and understanding internal data requirements, working with the team to achieve high-quality data ingestion, and building systems that process and transform the data while providing ad-hoc access to large datasets.
- Developing our experimentation infrastructure used for A/B testing.
Skills and Experience
- Bachelor's degree or equivalent experience in a technically focused discipline such as computer science, engineering, or math
- Experience building and operating highly available, distributed systems for extracting, ingesting, and processing large data sets
- Experience with data warehousing systems such as Snowflake, Oracle, Redshift, Teradata, or SQL Server
- Experience in configuring and deploying into AWS (or other public clouds)
- Experience providing technical leadership and mentoring other engineers on best practices in backend data engineering