Data Engineers (Graduates to Senior)
Rakuten Rewards is a rapidly expanding company: in 2017, members spent over $10 billion on Rakuten Rewards websites, and since the company's inception they have earned nearly $1 billion in cash back. During the peak days of 2018, over 8% of U.S. online shopping went through Ebates.
This is a tremendous opportunity for Data Engineers at all levels to join a fast-growing team at a fast-growing company!
- Share ownership of the production data platform and ensure high stability and availability
- Build tools to automate workload monitoring and take proactive measures to scale the platform or fix problems
- Extend our source control management system to help increase engineering productivity and ensure its scalability
- Improve our build tools to make our build and release process more efficient and reliable
- Own the deployment tools to make sure we can smoothly ship code to production at any time
- Help troubleshoot issues in production Hadoop clusters
- Design and build our Big Data pipeline that can transfer and process several terabytes of data using Apache Spark, Apache Kafka, Hive and Impala
- Persuasive communicator
- Form strong partnerships with product managers and stakeholders
- Willingness to mentor and delegate work to promote growth on the team
- Strong orientation toward data, using data to drive decisions
- Understands how to break work down into concise deliverables with a focus on iterative product delivery
- Become intimately familiar with the systems you build and take pride in writing maintainable code
- Contribute to weekly product releases, sprint planning, and code reviews
- Design, develop, deploy and maintain data applications to address business issues
- Provide architectural blueprints and technical leadership
- Collaborate with peer organizations, quality assurance and end users to produce cutting edge software solutions
- Interpret business requirements to articulate the business needs to be addressed
- Troubleshoot code level problems quickly and efficiently
Requirements (Intermediate to Senior Engineers)
- Strong fundamental knowledge of the internals of the big data stack: Hadoop, Spark, NoSQL, Kafka
- Experience working with the big data stack in the cloud (e.g. EMR, S3, Glue, Redshift, Looker)
- Experience with SQL and NoSQL technologies
- Hands-on Scala and/or Python programming skills
- Experience in designing and building large-scale data applications and data pipelines in production
- Must be self-organized and focused on continuous improvement of the platform and the team
- Must be a self-starter and a team player with great communication skills
- Highly motivated to add value to the team and platform using innovations around data and data applications
Nice-to-have skills:
- Experience in optimizing HDFS, Spark, YARN, Impala and Hive
- Experience automating the deployment of big data applications on on-premises Hadoop clusters and in the cloud
- Experience with tools and technologies like Gradle, Maven, Jenkins, Airflow, Git, IntelliJ, Eclipse, and Docker to support end-to-end software development
- Hands-on Unix/Linux knowledge
- Experience in Snowflake
- Experience in tools like Tableau, Amplitude
Graduate position requirements:
Proficiency in Java, Python, or Scala
We value transparency, collaboration, empathy, quality, and shipping. Candidates who share these values will excel at Ebates.