Software Engineer - Big Data (All Levels)
The Rubicon Project is looking for talented, passionate, experienced, and entrepreneurial engineers in big data. You will be responsible for the design, development, and operation of a multi-terabyte data processing framework in an agile environment using Hadoop/Pig/HBase/Greenplum/Java. This role requires the candidate to possess, or have a willingness to acquire, skills in large-volume data analysis with Hadoop/NoSQL technologies.
- 4+ years of overall software development experience, including a minimum of 2 years with core Java backend technologies
- Great interpersonal, written, and verbal communication skills, including the ability to create technical specifications, debate technical tradeoffs, and explain technical concepts to business users
- A strong understanding of algorithms and data structures, and their performance characteristics
- Proficiency in working and developing on Linux
- Experience supporting operations teams with deployments and debugging production issues
- Experience responding to feature requests, bug reports, performance issues and ad-hoc questions
- Bachelor’s degree in Computer Science, Mathematics, Engineering (or equivalent professional experience)
- 1+ years of experience working with Hadoop/Pig/Hive/HBase/MapReduce in Java
- Interest in machine learning, data mining, and analytics
- Experience building back-end systems for an Internet startup or ad technology company
- Proficient in Agile development, and able to integrate tightly with business and operations teams
- Experience with automated testing (TDD, Mocking, Unit/Functional/Integration)
- Experience with development and continuous integration tools like Maven, Bamboo, Git, Jenkins, Crucible, etc.
- Experience with cloud computing and basic systems administration
- Experience with NoSQL data stores