Senior Big Data Software Engineer

    • Plano, TX


DUTIES: Development of high-performance, distributed
computing tasks using big data technologies such as Hadoop, NoSQL, text mining,
and other distributed-environment technologies. Understand how to apply
technologies to solve big data problems and to develop innovative big data
solutions. Utilize big data programming languages and technologies, write code,
complete programming and documentation, and perform testing and debugging of
applications. Analyze, design, program, debug and modify software enhancements
and/or new products used in distributed, large scale analytics and
visualization solutions. Interact with data scientists and industry experts to
understand how data needs to be converted, loaded, and presented. Work with
raw data, cleanse it, and polish it into a format that can be consumed by
Data Scientists to create critical insights. Assist with ad-hoc
requests coming from business partners. Continuously improve CI/CD tools, processes
and procedures. Develop and maintain build, deployment, and continuous
integration systems. Integrate data from multiple data sources. Develop near
real-time data ingestion and stream-analytic solutions leveraging technologies
such as Apache Spark, Kafka, Flume, Python, Hive, Cassandra, MongoDB and HDFS.
Utilize Scala, Java, Map/Reduce, high performance tuning, machine learning
methods for classification and deep learning for pattern recognition. Utilize
git, GitHub, Jenkins, and Artifactory. Develop UDFs and UDAFs using Hive or
Apache Phoenix. Work in cloud computing environments. Perform analysis,
implementation, and performance tuning for engineered artifacts. Exercise
judgment on how to effectively communicate highly technical and complex details
through the use of visualization and careful selection of "knowns" versus
"hypotheticals". Build frameworks that provide store-and-retrieve mechanisms
for businesses. Resolve technical issues through debugging and investigation.
Utilize Scala, Java 1.8, Java, J2EE, Spring, Spring Boot, Spring JPA, Oracle,
Apache Spark, Spark SQL, Hive, Sqoop, Apache Kafka, HDFS, Apache Flume, HDP
2.5, AWS, AWS S3, REST API, JUnit, Jenkins 2.0, Git, JIRA, Tomcat, Apache Solr,
ActiveMQ, Maven, SBT, Gradle, Eclipse, IntelliJ, and Docker. Utilize Python,
Sourcetree, and PuTTY.

REQUIREMENTS: Requires a Bachelor's degree,
or foreign equivalent degree, in Computer Engineering and five
(5) years of progressive, post-baccalaureate experience in the job offered or
five (5) years of progressive, post-baccalaureate experience building
frameworks that provide store-and-retrieve mechanisms for businesses; resolving
technical issues through debugging and investigation; utilizing Java 1.8, Java,
J2EE, Spring, Spring Boot, Spring JPA, Oracle, Apache Spark, Spark SQL, Hive,
Sqoop, Apache Kafka, HDFS, Apache Flume, HDP 2.5, AWS, AWS S3, REST API, JUnit,
Jenkins 2.0, Git, JIRA, Tomcat, Apache Solr, ActiveMQ, Maven, SBT, Gradle,
Eclipse, IntelliJ, and Docker; utilizing Python, Sourcetree, and PuTTY.

AT&T
is an Affirmative Action/Equal Opportunity Employer, and we are committed to
hiring a diverse and talented workforce. EOE/AA/M/F/D/V