Data Engineer (Hadoop)
Since its founding in 2002, Big Fish has been the world’s largest producer and distributor of casual games. Our titles continuously land us at the top of app store charts. We’re bounding into the mid-core space, exploring new and exciting forms of gameplay, and producing more fun than ever! Big Fish is home to six individual studios, each with its unique style of gameplay. We’re united under a common goal: to produce and develop the very best in mobile gaming, and bring fun and entertainment to our millions of customers across the globe!
The Big Fish Business Intelligence Engineering Team is committed to continuous improvement, both personal and professional. We use agile practices and principles to improve our team and ensure we provide business value while maintaining an excellent work/life balance. We’re excited to be working in the big-data space, and we’re constantly learning new ways of processing and analyzing terabytes of information. To pull this off, we focus on a culture that is fun and that promotes learning, so our team can meet new challenges and have a good time doing it. We want our team members to enjoy what they do, from the achievement of learning a new skill to the satisfaction of completing a project.
Big Fish is a data-driven company. We’re looking for skilled, passionate and curious individuals who want to have an impact on the way we work. We are seeking a Data Engineer who can work across a range of technologies and interact with stakeholders and partners across the company. The perfect candidate can apply a breadth of knowledge, deep technical skills, and strategic thinking to build custom data pipelines and business intelligence systems using massively parallel processing technologies.
Data Engineers at Big Fish work on a variety of platforms including Netezza, SQL Server, Hadoop, and Tableau to solve our company’s most significant data challenges and accomplish key strategic business initiatives. We’re looking for effective communicators who like to work in a collaborative team environment. You’ll have the opportunity to play multiple roles, learn lots of new technologies, have direct access to customers, and have a major influence on the business.
WHAT WE DO
- Design, implement and maintain our data pipeline. Last year we built a real-time streaming pipeline that uses Spark to read data from Kafka, transform it and write it to HDFS.
- Write ELT code to transfer data from HDFS to relational databases to support business analysis.
- Collaborate with our team to build systems on which the business relies.
- Create automation using scripting technologies to make your job easier.
- Use third-party APIs to enrich our in-house data.
- Create proof of concept applications to demonstrate how our new technologies can add value to the business.
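To give a feel for the kind of work described above, here is a minimal sketch in Scala of a Kafka-to-HDFS pipeline like the one mentioned, written against Spark Structured Streaming. The broker address, topic name, schema fields, and HDFS paths are illustrative assumptions, not details from the actual Big Fish pipeline, which may use a different Spark streaming API.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.{col, from_json}
import org.apache.spark.sql.types.{StringType, StructType, TimestampType}

object EventPipeline {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder
      .appName("EventPipeline")
      .getOrCreate()

    // Hypothetical schema for a JSON game-event payload.
    val eventSchema = new StructType()
      .add("player_id", StringType)
      .add("event_type", StringType)
      .add("occurred_at", TimestampType)

    // Read the raw byte stream from a Kafka topic (names are assumptions).
    val raw = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "broker1:9092")
      .option("subscribe", "game-events")
      .load()

    // Transform: decode the Kafka value and flatten the parsed JSON columns.
    val events = raw
      .selectExpr("CAST(value AS STRING) AS json")
      .select(from_json(col("json"), eventSchema).as("event"))
      .select("event.*")

    // Write the transformed stream to HDFS as Parquet, with checkpointing
    // so the job can recover its Kafka offsets after a restart.
    events.writeStream
      .format("parquet")
      .option("path", "hdfs:///data/game-events")
      .option("checkpointLocation", "hdfs:///checkpoints/game-events")
      .start()
      .awaitTermination()
  }
}
```

Running a job like this requires a Spark cluster with access to the Kafka brokers and HDFS, so it is a shape-of-the-solution sketch rather than something to execute standalone.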
YOU MIGHT BE A GOOD FIT IF
You’re curious, motivated, and comfortable with ambiguity; you play well with others and can pick up new technologies quickly.
You have experience with or interest in:
- The Hadoop ecosystem: Spark, Kafka, Hive, HiveQL, MapReduce, YARN, HDFS, etc.
- Working in Linux environments.
- Test-Driven Development.
- Writing code using Scala and Java.
- Driving up the quality of your team’s code through reviews and feedback.
- Experimenting - you’ll be dealing with emerging technologies for which no “best practices” exist.
- Sharing your knowledge with the rest of the team.
- Getting things done using Scrum.
- Learning, learning, and learning.
Want to discover more about life at Big Fish? Check us out on TheMuse.com!