Big Data-Hadoop Administrator

Job Title
Big Data-Hadoop Administrator

Requisition Number
R2142 Big Data-Hadoop Administrator (Open)

Location
Glendale, Arizona

Additional Locations

Job Information

Provide administration, setup, design, and production support for the Hadoop ecosystem. Work with other members of the team to ensure operational stability, performance, and excellence are maintained. Train developers and other team members on existing and new concepts of the ecosystem. Patch the system regularly and work with the Unix teams to keep OS patching in sync. Support code deployments and data movements, keep the customer base informed of changes, and resolve their issues in a timely manner.

What will you do:

Manage and maintain more than 15 different environments in the Big Data space

Provide daily updates to management on issues and the overall health of the system

Support the project teams on their deliverables

Support the monthly security patching efforts

Provide support to the development teams and provide knowledge transfer (KT) to them on an ongoing basis

Train the junior admins and establish a 24x7 rotation model to support the critical applications

Perform any related work associated with the job function as needed

Required:

• Typically has 9 or more years of consulting and/or industry experience

• Ability to support project work that varies from average to complex in size

• Ability to work on multiple work packages for various teams

• Excellent teamwork and interpersonal skills

• Professional oral and written communication skills

• Ability to mentor and manage junior staff and further their professional growth

• Ability to obtain and maintain the required clearance for this role

• Red Hat Enterprise Linux 6 administration experience

• Working knowledge of the configuration management tool Chef

• Extensive open-source Big Data platform administration experience, including the following:

• Hadoop

• HDFS

• HBase

• MapReduce

• YARN

• Spark

• Pig

• Hive

• Storm

• R

• Zeppelin

• Kafka

• ZooKeeper

Required:

• Bachelor's Degree

Preferred:

• Prior professional services or consulting experience

Responsibilities:

Responsible for the implementation and ongoing administration of Hadoop infrastructure.

Align with the systems engineering team to propose and deploy new hardware and software environments required for Hadoop and to expand existing environments.

Work with data delivery teams to set up new Hadoop users. This includes setting up Linux users, setting up Kerberos principals, and testing HDFS, Hive, Pig, and MapReduce access for the new users.

Perform cluster maintenance, including the addition and removal of nodes.

Tune the performance of Hadoop clusters and Hadoop MapReduce routines.

Screen Hadoop cluster job performance and plan capacity.

Monitor Hadoop cluster connectivity and security.

Manage and review Hadoop log files.

Manage and monitor the file system.

Support and maintain HDFS.

Team diligently with the infrastructure, network, database, application, and business intelligence teams to guarantee high data quality and availability.

Collaborate with application teams to install operating system and Hadoop updates, patches, and version upgrades when required.

Serve as the point of contact for vendor escalation.
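The user-onboarding responsibility above (Linux account, Kerberos principal, HDFS and Hive access tests) can be sketched as a dry-run shell outline. This is an illustrative sketch only: the username jdoe, the realm EXAMPLE.COM, and the exact kadmin/hdfs invocations are assumptions for a typical Kerberized cluster, not details from this posting. Each step is printed rather than executed so the sequence can be reviewed safely.

```shell
#!/bin/sh
# Dry-run sketch of onboarding a new Hadoop user on a Kerberized cluster.
# Nothing is executed against a real cluster: each command is printed so
# the plan can be reviewed. Username and realm below are hypothetical.

NEWUSER="jdoe"          # hypothetical new user
REALM="EXAMPLE.COM"     # hypothetical Kerberos realm

# Print a planned command instead of running it.
plan() { echo "PLAN: $*"; }

# 1. Create the Linux account on the cluster nodes
plan useradd -m "$NEWUSER"

# 2. Create a Kerberos principal for the user
plan kadmin.local -q "addprinc -randkey ${NEWUSER}@${REALM}"

# 3. Create the user's HDFS home directory and hand over ownership
plan hdfs dfs -mkdir -p "/user/${NEWUSER}"
plan hdfs dfs -chown "${NEWUSER}:${NEWUSER}" "/user/${NEWUSER}"

# 4. Smoke-test access as the new user: HDFS listing, trivial Hive query
plan sudo -u "$NEWUSER" hdfs dfs -ls "/user/${NEWUSER}"
plan sudo -u "$NEWUSER" hive -e 'SHOW DATABASES;'
```

In practice the `plan` prefix would be dropped (or replaced by the actual commands run via a configuration management tool such as Chef, which the posting lists as a required skill) once the sequence is approved.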


