Senior Big Data Software Engineer
- Plano, TX
DUTIES: Development of high performance, distributed computing tasks using big data technologies such as Hadoop, NoSQL, text mining and other distributed environment technologies. Understand how to apply technologies to solve big data problems and to develop innovative big data solutions. Utilize big data programming languages and technology, write code, complete programming and documentation, and perform testing and debugging of applications. Analyze, design, program, debug and modify software enhancements and/or new products used in distributed, large scale analytics and visualization solutions. Interact with data scientists and industry experts to understand how data needs to be converted, loaded and presented. Work with raw data, cleanse it, and polish it into a format that can be consumed by Data Scientists to create critical insights. Assist with ad-hoc requests coming from business partners. Continuously improve CI/CD tools, processes and procedures. Develop and maintain build, deployment, and continuous integration systems. Integrate data from multiple data sources. Develop near real-time data ingestion and stream-analytic solutions leveraging technologies such as Apache Spark, Kafka, Flume, Python, Hive, Cassandra, MongoDB and HDFS. Utilize Scala, Java, Map/Reduce, high performance tuning, machine learning methods for classification and deep learning for pattern recognition. Utilize git, GitHub, Jenkins, and Artifactory. Develop UDF and UDAF functions using Hive or Apache Phoenix. Work in cloud computing environments. Perform analysis, implementation, and performance tuning for engineered artifacts. Exercise judgment on how to effectively communicate highly technical and complex details through the use of visualization and careful selection of "knowns" versus "hypotheticals". Build storage and retrieval frameworks for businesses.
Resolve technical issues through debugging and investigation. Utilize Scala, Java 1.8, Java, J2EE, Spring, Spring Boot, Spring JPA, Oracle, Apache Spark, Spark SQL, Hive, Sqoop, Apache Kafka, HDFS, Apache Flume, HDP 2.5, AWS, AWS S3, REST API, JUnit, Jenkins 2.0, Git, JIRA, Tomcat, Apache Solr, ActiveMQ, Maven, SBT, Gradle, Eclipse, IntelliJ, and Docker. Utilize Python, SourceTree, and PuTTY.
REQUIREMENTS: Requires a Bachelor's degree, or foreign equivalent degree, in Computer Engineering and five (5) years of progressive, post-baccalaureate experience in the job offered or five (5) years of progressive, post-baccalaureate experience building storage and retrieval frameworks for businesses; resolving technical issues through debugging and investigation; utilizing Java 1.8, Java, J2EE, Spring, Spring Boot, Spring JPA, Oracle, Apache Spark, Spark SQL, Hive, Sqoop, Apache Kafka, HDFS, Apache Flume, HDP 2.5, AWS, AWS S3, REST API, JUnit, Jenkins 2.0, Git, JIRA, Tomcat, Apache Solr, ActiveMQ, Maven, SBT, Gradle, Eclipse, IntelliJ, and Docker; utilizing Python, SourceTree, and PuTTY.
AT&T is an Affirmative Action/Equal Opportunity Employer, and we are committed to hiring a diverse and talented workforce. EOE/AA/M/F/D/V