EPAM Systems

Big Data Software Engineer

Katowice, Poland

EPAM is a leading global provider of digital platform engineering and development services. We are committed to having a positive impact on our customers, our employees, and our communities. We embrace a dynamic and inclusive culture. Here you will collaborate with multi-national teams, contribute to a myriad of innovative projects that deliver the most creative and cutting-edge solutions, and have an opportunity to continuously learn and grow. No matter where you are located, you will join a dedicated, creative, and diverse community that will help you discover your fullest potential.

The remote option applies only to candidates working from any location in Poland.

DESCRIPTION
We are currently looking for a Big Data Software Engineer to make the team even stronger.

ABOUT THE PROJECT
We are building out a Data Management & Analytics platform to help the customer simplify data analytics, extract meaningful information from the data, and realize measurable value from it. Combined with predictive modeling, the information mined from the raw data supports improvements in quality of care and cost efficiency for the company. The ultimate goal of the program is to build healthier individuals, healthier families, and healthier communities through the analytics platform delivered by this team.
Responsibilities

  • Build big data pipelines for batch and real-time data processing
  • Develop, load and run predictive models in machine learning platforms
  • Build and run data analytics environments
  • Analyze existing ETLs and design a new solution based on Spark
  • Collect, process, and cleanse data from a wide variety of sources; transform unstructured data sets into structured data for algorithm input
  • Evaluate the effectiveness of user experiences, determining what data is needed and how to collect it
  • Integrate ML models into production
  • Design and use validation tools to compare results of the original and new solutions
  • Build and maintain a Hadoop or Spark cluster, together with the many other tools in the ecosystem: databases (such as Hive and HBase) and streaming data platforms (Kafka, Spark Streaming, etc.)
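
To give a flavor of the "transform unstructured data into structured data" duty above, here is a minimal, hypothetical Python sketch (plain standard library, standing in for a full Spark job). The line format, field names, and values are illustrative assumptions, not part of this role's actual data.

```python
import re
from typing import Optional

# Hypothetical raw-log format (assumption for illustration only):
# "2024-01-15 09:32:10 | patient=12345 | cost=199.99"
LINE_RE = re.compile(
    r"(?P<date>\d{4}-\d{2}-\d{2}) (?P<time>\d{2}:\d{2}:\d{2})"
    r" \| patient=(?P<patient>\d+) \| cost=(?P<cost>[\d.]+)"
)

def parse_line(line: str) -> Optional[dict]:
    """Cleanse one raw line into a structured record, or None if malformed."""
    m = LINE_RE.match(line.strip())
    if not m:
        return None  # malformed input is dropped during cleansing
    rec = m.groupdict()
    rec["patient"] = int(rec["patient"])  # convert string fields to typed values
    rec["cost"] = float(rec["cost"])
    return rec

raw_lines = [
    "2024-01-15 09:32:10 | patient=12345 | cost=199.99",
    "not a valid line",  # gets filtered out during cleansing
]
records = [r for r in (parse_line(l) for l in raw_lines) if r is not None]
```

In a production pipeline the same parse-and-filter step would typically run distributed, e.g. as a Spark `map`/`filter` over an input RDD or DataFrame rather than a local list comprehension.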
Requirements
  • Experience in Java, Scala, or Python
  • Experience with Apache Spark (Core, Streaming, SQL)
  • Experience with the Apache Hadoop ecosystem (Hive, HBase, HDFS)
  • Experience with CI/CD tooling (Docker, Jenkins, Git)
  • Experience with various machine learning tools
  • Experience with SQL and NoSQL databases
  • English skills sufficient to discuss complex data-driven findings and technical specifications with colleagues and department heads
Nice to have
  • Experience with REST API web services
  • Experience with Kafka or IBM MQ
  • Experience in Linux
We offer
  • Vast opportunities for self-development: online courses and library, experience exchange with colleagues around the world, partial coverage of certification costs
  • English language classes
  • Polish language classes for foreigners
  • Career development center
  • Unlimited access to LinkedIn learning solutions
  • Possibility to relocate for short- and long-term projects (e.g., to the USA or Switzerland)
  • Benefit package (private insurance, health care, Multisport card, lunch tickets, shopping vouchers, etc.)
  • Possibility to be involved in an international project
  • Remote work options
  • Mentoring programs with experts that will help you to grow
  • Discount on Apple products up to 10%
  • Relocation package for foreign applicants as well as for people relocating within Poland

Please note that only selected candidates will be contacted.
Apply

Job ID: EPAM-55164