Big Data Engineer

About Us
We are a full stack data science company and a wholly owned subsidiary of The Kroger Company. We own 10 petabytes of data and collect more than 35 terabytes of new data each week, sourced from 62 million households. As a member of our engineering team, you will use a variety of cutting-edge technologies to develop applications that turn our data into actionable insights used to personalize the customer experience for shoppers at Kroger. We follow an agile development methodology, starting with Big Room Planning, which brings everyone into the planning process to build scalable enterprise applications.


Data Developer – What you’ll do
As a data developer, you will develop strategies and solutions to ingest, store, and distribute our big data. Our developers use Scala, Hadoop, Spark, Hive, JSON, and SQL in 10-week scrum teams to develop our products, tools, and features.

Responsibilities
Take ownership of features and drive them to completion through all phases of the 84.51° SDLC. This includes external-facing and internal applications as well as process improvement activities such as:
• Lead the design of Hadoop- and SQL-based solutions
• Develop Hadoop- and SQL-based solutions
• Perform unit and integration testing
• Collaborate with senior team members to ensure consistent development practices
• Mentor junior team members
• Participate in retrospective reviews
• Participate in the estimation process for new work and releases
• Bring new perspectives to problems
• Be driven to improve yourself and the way things are done
Education
• Bachelor’s degree, typically in Computer Science, Management Information Systems, Mathematics, Business Analytics, or another technically strong program

Requirements
• 5+ years of professional data development experience
• Strong understanding of Agile Principles (Scrum)
• Proficient with relational data modeling
• 5+ years of development experience with SQL (Oracle, SQL Server)
• 5+ years of development experience with Hadoop/HDFS
• Full understanding of ETL concepts
• Full understanding of data warehousing concepts
• Exposure to version control systems (Git, SVN)
• 3+ years of development experience with Java, Scala, or Python
• Experience with Spark

Preferred Skills – Experience in the following:
• Exposure to NoSQL (Mongo, Cassandra)
• SOA
• JUnit
• CI/CD

 

