Job Description
- Develop, test, and support new features in our entity resolution and analytics projects using Scala functional programming.
- Write new Scala code to improve our identity resolution accuracy and functionality, using Maven and Git for builds and version control.
- Write new Scala code with Spark, Hadoop, and the MapReduce framework for big data.
- Write new Java, Scala, and Python code to move the current product to a microservice-based framework using Kubernetes, Docker containers, Redis, and RabbitMQ.
- Test fully functional code and integrate it to run on cloud providers such as Amazon Web Services and Google Cloud Platform.
- Create new designs and write code to run on GCP tools and frameworks such as Dataproc, Bigtable, Cloud Composer, BigQuery, and GKE.
- Write new code to test the system's ability to meet its stated requirements, and collaborate with Quality Assurance (QA) on test cases.
- Collaborate with cross-functional teams to convert product and platform priorities into user stories in Jira and specify their completion criteria.
- Define and document processes that the Data Quality team will implement to ensure consistency of build processes.
- Coordinate sprint completion and acceptance activities.
- Plan and communicate product and platform releases to internal and external stakeholders.
- Troubleshoot and analyze issues raised by our clients.
- Work with Unix and Linux to create and maintain applications.
- Set up and develop code that uses the Redis in-memory key-value store.
- Run Hive and Spark queries against datasets to catch errors in identity resolution or data transformation processes.
- Work with clients to understand how they use the data so that discrepancies can be identified.
Job Requirements
Master's degree in Computer Science, Electrical Engineering, Information Systems, Computer Engineering, or any Engineering or related field, plus three years of experience in the job offered or as a Technical Analyst, or writing functional programs in the Scala language and developing code in Spark Core, Spark SQL, and the Hadoop MapReduce framework for big data batch and streaming applications, required.

Specific requirements: demonstrated expertise in:
- Scala functional programming for application development;
- developing a microservice-based framework using Kubernetes and Docker containers;
- converting data science machine learning model Python code to Scala;
- designing and writing code for GCP offerings, mainly Cloud Composer, Dataproc, Bigtable, and BigQuery;
- setting up and developing code that uses the Redis in-memory key-value store;
- working with Unix and Linux to create and maintain applications.

(A Bachelor's degree in Computer Science, Electrical Engineering, Information Systems, Computer Engineering, or any Engineering or related field, plus five years of progressive experience in the job offered or as a Technical Analyst, or writing functional programs in the Scala language and developing code in Spark Core, Spark SQL, and the Hadoop MapReduce framework for big data batch and streaming applications, is also acceptable.)