CS Data Integration Engineer

Founded in August of 2008 and based in San Francisco, California, Airbnb is a trusted community marketplace for people to list, discover, and book unique travel experiences around the world. Whether an apartment for a night, a castle for a week, or a villa for a month, Airbnb allows people to Belong Anywhere through unique travel experiences at any price point, in more than 34,000 cities and over 190 countries. We promote a culture of curiosity, humanity, and creativity through our product, brand, and, most importantly, our people.

Role & Responsibilities

  • Build ETL and API integrations, and scale systems that support voice transcription and workforce management
  • Build and manage batch and streaming data processing pipelines on enterprise and open-source technology platforms
  • Define data architecture and design in consultation with analysts, data scientists, engineers and business stakeholders
  • Integrate voice, text and other data sources that will contribute to the solution
  • Support deployment engineers during implementation of the system design
  • Use the latest advances in large-scale data processing to support analytics, machine learning, and data visualization
  • Leverage real-time streaming infrastructure that enables teams and systems to move quickly, with accurate data and minimal delay
  • Design solutions to process, analyze, and store data for applications that are continually growing in scale, while optimizing for security and speed

Requirements

  • Bachelor's degree with 5 years, or Master's degree with 3 years, of related experience
  • Strong fundamentals in computer science and software engineering
  • Experience with customer service or customer operations data is a strong plus
  • Strong ability with Linux/Unix administration and Bash scripting
  • Experience with Java/Scala is preferred
  • Experience with a modern scripting language such as Ruby, Python, or Bash
  • Strong working knowledge of relational databases and query authoring (SQL)
  • Experience with open-source technologies such as Kafka, Hadoop, Hive, Presto, and Spark
  • Rigor in high code quality, automated testing, and other engineering best practices
  • BS/MS/PhD in Computer Science and a high level of practical experience in a related field
