Principal Data Engineer
- Oakland, CA
Are you looking to join an innovative organization powering payments for the next generation of fintech and commerce innovators? Marqeta built the world’s first open API issuer processor platform from scratch, powering prepaid, debit, and credit cards for some of the most recognizable names in financial technology, alternative lending, on-demand services, and ecommerce, and has become a leader in payment innovation. Our company is built on a team of industry experts, a dynamic approach to challenging problems, and an open environment and culture focused on ideas and innovation.
Not only do we have an inspiring and innovative culture, but only Marqeta can offer you the chance to help redefine the payments industry - it's an exciting time around here. As a testament to the company we've collectively built, our world-class team voted Marqeta one of the Bay Area’s Best Places to Work. So take a look at our current career opportunities, introduce yourself, and tell us why you'd be a great addition to our team – we love meeting top talent!
We are seeking an experienced Principal Data Engineer to lead the technical development of our data architecture and self-service capabilities. You will partner with several internal customer teams to map their needs, and with Data & Insights team members to design and implement scalable, automated, real-time data solutions. You should be comfortable in a hyper-growth startup environment that demands fast-scaling solutions and rapid iteration, and where we want you to take risks.
About Our Team
The newly formed Data & Insights team brings together Data Analytics, Data Science, Data Platform Engineering, and Data Product Managers to build trustworthy, scalable, resilient data sets, insights, and data experiences that are actively leveraged to design innovative new products, build new business models, reimagine customer journeys, and improve operational efficiency. We use SQL, Python, R, Airflow, Kafka, and many other open-source tools every day. We partner daily with our product engineering teams, business groups, and customers to build the right solutions. We use tools like Slack to communicate effectively, and we occasionally hang out with each other outside of work (pandemic permitting). We love to learn and enjoy solving hard problems.
Duties and Responsibilities
You will be a leader on our Data & Insights team, working closely with the heads of Product, Data Platform Engineering, Analytics, and Data Science to execute on our data strategy. You will play a key role in developing and executing a plan for our overall data architecture, data quality, and data governance. Your duties will include:
- Design robust, testable data pipelines.
- Evaluate build vs. buy for various data technologies.
- Help architect and select the right tools and technologies for data movement and organization that is secure, performant, and well organized.
- Take complete ownership of data quality for the data feeds you build, and bring a passion for high-quality data.
- Work with the latest approaches to creating physical and virtual data layers, for example Apache Spark, Redshift or Snowflake, data lakes on S3, etc.
- As the most senior data engineer, play a key role in building the data pipeline and mentoring junior engineers.
- Be a thought leader for engineering teams supporting large-scale, distributed systems.
- Always be on the lookout for ways to automate and improve existing data processes for faster turnaround and higher productivity.
Requirements
- A positive outlook, strong work ethic, and ability to work in a team environment.
- At least 8 years of experience building and designing scalable, robust data pipelines.
- A history of designing, building, and launching efficient, reliable data pipelines to move data (in both large and small volumes) into and across data warehouses.
- At least 10 years of experience as a data engineer, data architect, data modeler, or similar, designing and implementing data systems including data warehouses, operational data stores, and long-term storage mechanisms.
- Experience developing distributed applications using Kafka, Kinesis, and/or Pub/Sub.
- Experience with modern cloud data warehouses (e.g., Redshift, Snowflake).
- Experience with CI/CD pipelines.
- Experience with Python.
Bonus Points for:
- Experience with Kubernetes, Terraform, or Airflow.
- Experience building customer-facing data solutions (in-house or third-party tools such as Tableau or Looker).
- Experience in payments and good domain knowledge of the financial services industry.
- Data governance experience.
- Experience with, or an understanding of, the model development lifecycle: feature engineering, model development and deployment, orchestration, feedback, and monitoring.
- Experience with Apache Spark.
- Experience with various database technologies (columnar, NoSQL, relational, cache, time series, etc.).
- Experience with Hadoop, statistics, or data science.
Perks of Working at Marqeta
- Work alongside data engineers, data analysts, and data scientists to build shared knowledge and learn industry-leading techniques and strategies.
- Be an early member of a data-first team focused on analytics at a creative startup - we’re growing, and your career and opportunities with us will grow, too!
- Use modern tools and languages that further your career growth.
- Rich suite of benefit plans - Employee premiums paid 100%
- Market-leading fully paid Parental Leave
- Retirement savings - 401k plan with a Company match
- Meaningful Equity
- Bi-annual Hack Weeks to support and reward innovation
- Beautiful downtown Oakland office in a great location, with stunning views of Lake Merritt
- Convenient access to public transportation
- Open, transparent culture that includes weekly All Hands meetings, Lunch-and-Learns, all-company offsite, etc.
- Commuter and Parking monthly subsidy
- Access to corporate gym membership rates and other discounts and employee perks!
- Fully stocked kitchen, catered lunches twice a week, breakfast on Fridays, and more!