Senior Engineer, Data & Analytics

Nike is looking for a seasoned engineer who can lead and grow teams of data engineers and developers, supporting machine learning engineers and data scientists in delivering scalable machine learning and advanced analytics solutions to customers across our business. You will work on a variety of complex business problems such as forecasting, personalization, and inventory optimization, and will productionalize and scale solutions in the cloud as APIs, stream processing, or massive batch processing. You will leverage big data, parallel processing technologies, advanced analytics, machine learning, and deep learning techniques to quantitatively plan product demand, allocate resources, and target the right customers with the best products. You will foster partnerships with best-of-breed open source communities, commercial vendors, and universities. Above all, your work will accelerate Nike's core mission of serving Athletes*.

Primary Responsibilities:

• Apply a deep understanding of modern data processing technology stacks such as Spark and the Hadoop ecosystem
• Collaborate with data scientists to design, architect, and implement performant, high-volume production forecast models
• Design reusable components, frameworks, and libraries, such as User Defined Functions (UDFs)
• Support investigation of new cloud services, software packages/tools, APIs, container management, and distributed systems to continuously deliver quality analytics and machine learning at scale
• Tune performance and scalability, drawing on knowledge of algorithms and computational complexity
• Build continuous integration and test-driven development environments
• Participate in an Agile/Scrum process to deliver high-quality software releases
• Review code and provide feedback on best practices and performance improvements
• Mentor and guide other software engineers within the team

Qualifications:
• MS/BS degree in a computer science field or related discipline
• 5+ years' experience in large-scale software development
• 3+ years' experience with Hadoop and its ecosystem
• Experience with AWS components and services, particularly CloudFormation, IAM, ECS/EKS, EMR, S3, and Lambda/Serverless
• Strong Spark, Python, shell scripting, and SQL skills
• Experience writing automation to deploy R (RStudio), Spark ML, and/or Python apps (pandas, numpy, scipy, etc.)
• Proficiency with R (RStudio), Spark ML, and/or Python (pandas, numpy, scipy, etc.) coding languages and libraries
• Experience with scheduling tools like Airflow
• Experience with software engineering best practices including unit testing, continuous integration, and source control (GitHub, Bitbucket)
• Experience with container orchestration (e.g., Kubernetes, Docker)
• Experience with ETL development in big data environments
• Successful track record of learning new tools and technologies
• Excellent written and verbal communication skills

Nice to have:
• Proficiency with data visualization tools and libraries like Tableau, Shiny, ggplot2
• Experience with messaging & event processing systems such as Kafka, Kinesis
• Strong understanding of statistical modeling, machine learning & deep learning techniques