Big Data Engineer

• Bangalore, India

Job Description
Plans, designs, develops, and tests software systems or applications for software enhancements and new products, including cloud-based or internet-related tools. Analyzes requirements, tests, and integrates application components, and ensures that system improvements are successfully implemented. Drives unit test automation. Is well versed in current development methodologies such as Agile, Scrum, DevOps, and test-driven development. Enables solutions that take into account APIs, security, scalability, manageability, usability, and other critical factors that contribute to complete solutions. Usually holds an academic degree in Computer Science, Computer Engineering, or Computational Science.
Qualifications

- Navigate the Hadoop ecosystem and know how to leverage or optimize the use of what Hadoop has to offer
- Hadoop development, debugging, and implementation of workflows and common algorithms
- Apache Hadoop and data ETL (extract, transform, load), ingestion, and processing with Hadoop tools
- Knowledge of building a scalable, integrated data lake for an enterprise
- Use the HDFS architecture, including how HDFS handles file sizes, block sizes, and block abstraction. Understand default replication values and the storage requirements replication implies (see the sizing sketch after this list). Determine how HDFS stores, reads, and writes files.
- Analyze the order of operations in a MapReduce job, how data moves from place to place, how partitioners and combiners function, and the sort-and-shuffle process (see the word-count sketch below)
- Determine which of Hadoop's key and value data types are appropriate for a job, and understand the common key and value types in the MapReduce framework and the interfaces they implement
- Organizing data into tables, performing transformations, tuning performance, and simplifying complex queries with Hive and Impala (see the Spark SQL sketch below)
- How to pick the best tool for a given task in Hadoop, achieve interoperability, and manage recurring workflows
- Strong programming skills in Java or Python
- Working knowledge of data ingestion using Spark for file types such as JSON, XML, and CSV, as well as from databases (see the ingestion sketch below)
- Hands-on development experience extracting data from sources such as SFTP, Amazon S3, and other cloud data sources (see the S3/SFTP sketch below)
- Designing optimal HBase schemas for efficient data storage, and ingesting data into HBase using the native API (see the HBase sketch below)
- Knowledge of Kafka, Spark Streaming, and streaming data load types and techniques (see the Kafka sketch after this list)
- Strong SQL and data analysis skills
- Strong shell scripting skills (or another scripting language)
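
As a concrete sizing illustration for the HDFS item above, a minimal sketch in plain Python arithmetic, assuming the common 128 MB block size and HDFS's default replication factor of 3 (the input file size is hypothetical):

    # Hypothetical sizing example: how HDFS block count and raw storage
    # follow from file size, block size, and replication factor.
    import math

    file_size_gb = 10      # hypothetical input file
    block_size_mb = 128    # common HDFS default block size
    replication = 3        # HDFS default replication factor

    file_size_mb = file_size_gb * 1024
    blocks = math.ceil(file_size_mb / block_size_mb)   # 80 blocks
    raw_storage_gb = file_size_gb * replication        # 30 GB on disk

    print(f"{blocks} blocks, {raw_storage_gb} GB of raw cluster storage")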
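
To make the sort-and-shuffle contract concrete, a minimal word-count pair for Hadoop Streaming (Python over stdin/stdout; in native Java MapReduce the analogous key and value types would be Writables such as Text and IntWritable). The framework sorts mapper output by key before the reducer runs, so the reducer sees each key's values contiguously:

    # mapper.py: emit (word, 1) pairs, one tab-separated pair per line
    import sys

    for line in sys.stdin:
        for word in line.split():
            print(f"{word}\t1")

    # reducer.py: input arrives sorted by key (the shuffle), so counts
    # for one word are contiguous and can be summed with a running total
    import sys

    current, total = None, 0
    for line in sys.stdin:
        word, count = line.rsplit("\t", 1)
        if word != current:
            if current is not None:
                print(f"{current}\t{total}")
            current, total = word, 0
        total += int(count)
    if current is not None:
        print(f"{current}\t{total}")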
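
One common pattern for organizing data into tables and hiding complex queries behind views, sketched with PySpark's Hive support (the database, table, and path names are hypothetical):

    from pyspark.sql import SparkSession

    spark = (SparkSession.builder
             .appName("hive-sketch")
             .enableHiveSupport()   # lets Spark read/write Hive tables
             .getOrCreate())

    # Persist a cleaned dataset as a Hive table
    # (assumes the "analytics" database already exists)
    events = spark.read.parquet("/data/raw/events")  # hypothetical path
    events.write.mode("overwrite").saveAsTable("analytics.events")

    # Simplify a recurring complex query by wrapping it in a view
    spark.sql("""
        CREATE OR REPLACE VIEW analytics.daily_counts AS
        SELECT event_date, COUNT(*) AS n
        FROM analytics.events
        GROUP BY event_date
    """)
    spark.sql("SELECT * FROM analytics.daily_counts ORDER BY n DESC").show()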
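
A hedged PySpark ingestion sketch for the file types named above (all paths, the JDBC URL, and credentials are hypothetical; XML reading assumes the external spark-xml package, which is not part of core Spark):

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("ingest-sketch").getOrCreate()

    # JSON and CSV are supported by core Spark readers
    json_df = spark.read.json("/data/in/orders.json")
    csv_df = spark.read.csv("/data/in/orders.csv",
                            header=True, inferSchema=True)

    # XML requires the external spark-xml package on the classpath
    xml_df = (spark.read.format("com.databricks.spark.xml")
              .option("rowTag", "order")
              .load("/data/in/orders.xml"))

    # Databases via JDBC (the driver jar must be available)
    db_df = (spark.read.format("jdbc")
             .option("url", "jdbc:postgresql://dbhost:5432/sales")
             .option("dbtable", "public.orders")
             .option("user", "etl_user")       # hypothetical credentials
             .option("password", "changeme")
             .load())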
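
Extraction from cloud and SFTP sources, sketched with the boto3 and paramiko libraries (the bucket, host, credentials, and paths are hypothetical):

    import boto3
    import paramiko

    # Amazon S3: download one object; credentials are resolved from
    # the environment or the standard AWS config chain
    s3 = boto3.client("s3")
    s3.download_file("my-landing-bucket", "raw/orders.csv",
                     "/tmp/orders.csv")

    # SFTP: fetch a file over SSH
    transport = paramiko.Transport(("sftp.example.com", 22))
    transport.connect(username="etl_user", password="hypothetical")
    sftp = paramiko.SFTPClient.from_transport(transport)
    sftp.get("/outbound/orders.csv", "/tmp/orders_sftp.csv")
    sftp.close()
    transport.close()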
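
The native HBase API named above is Java; as an illustrative stand-in, this Python sketch uses the Thrift-based happybase client (the host, table, and column family are hypothetical) to show a row-key design driven by the read pattern:

    import happybase

    connection = happybase.Connection("hbase-thrift-host")  # hypothetical
    table = connection.table("user_events")                 # hypothetical

    # Row-key design follows the access pattern: keying by user id plus
    # a reversed timestamp keeps each user's newest events adjacent
    user_id, ts = "u42", 1700000000
    row_key = f"{user_id}:{2**31 - ts}".encode()

    table.put(row_key, {
        b"d:event_type": b"click",   # "d" = single data column family
        b"d:page": b"/home",
    })
    connection.close()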
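
Finally, a minimal stream-load sketch, assuming Spark Structured Streaming with the spark-sql-kafka connector on the classpath (the broker address and topic name are hypothetical):

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("stream-sketch").getOrCreate()

    # Read an unbounded stream from a Kafka topic
    events = (spark.readStream.format("kafka")
              .option("kafka.bootstrap.servers", "broker:9092")
              .option("subscribe", "orders")
              .load())

    # Kafka delivers raw bytes; cast the payload before processing
    decoded = events.selectExpr("CAST(key AS STRING)",
                                "CAST(value AS STRING)")

    # Console sink for the sketch; production jobs would write to
    # files, tables, or another topic with checkpointing configured
    query = (decoded.writeStream.outputMode("append")
             .format("console").start())
    query.awaitTermination()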
Inside this Business Group
Intel's Information Technology Group (IT) designs, deploys and supports the information technology architecture and hardware/software applications for Intel. This includes the LAN, WAN, telephony, data centers, client PCs, backup and restore, and enterprise applications. IT is also responsible for e-Commerce development, data hosting and delivery of Web content and services.

Legal Disclaimer:
Intel prohibits discrimination based on race, color, religion, gender, national origin, age, disability, veteran status, marital status, pregnancy, gender expression or identity, sexual orientation or any other legally protected status.
It has come to our notice that some people have received fake job interview letters, ostensibly issued by Intel, inviting them to attend interviews at Intel's offices for various positions and requiring them to deposit money to be eligible for the interviews. We wish to bring to your notice that these letters are not issued by Intel or any of its authorized representatives. Hiring at Intel is based purely on merit, and Intel does not ask or require candidates to deposit any money. We urge people interested in working for Intel to apply directly at www.jobs.intel.com and not fall prey to unscrupulous elements.

IN Experienced Hire JR0126123 Bangalore

Intel creates world-changing technology that enriches the lives of every person on earth.
