- Bachelor's degree in computer science, information systems, or closely related field
- 1-5 years of experience in a software engineering or IT infrastructure role
- Advanced knowledge of the Hadoop stack, with prior experience in Hive, Pig, HBase, Impala, and Sqoop
- Advanced knowledge of object-oriented programming, distributed systems, and software design principles
- Advanced knowledge of database maintenance and administration using MS SQL Server
- Strong programming experience in Java, Python, and R
- Hands-on experience with the Microsoft Azure or Amazon EC2 cloud platforms
- Demonstrated ability to design and implement ETL workflows across both Windows and Linux environments
- Highly motivated, with the ability to work effectively with people at all levels of an organization
- Experienced in relational database systems like Oracle Database, IBM DB2, or MySQL
- Experienced in NoSQL systems like MongoDB, Redis, or Cassandra
- Experienced in Apache Spark
- Willingness to travel about 25% of the time
Who You'll Work With
You'll work in our San Jose, Costa Rica office and will be a part of our Knowledge Network. You will work with consultants, clients, or internal teams to prepare complex data analyses and models that help solve client problems and deliver significant, measurable impact.
Our global knowledge network includes more than 1,800 knowledge professionals who work alongside our consultants and clients to generate distinctive insights and proprietary knowledge. You will work with this group to help develop, codify, sanitize, and manage our global knowledge portal, which includes more than 50,000 documents that form the backbone of our firm's knowledge management.
What You'll Do
You will expand McKinsey's current analytics and machine learning capabilities, helping create new strategies and data tools within an innovative organization.
You will help shape the future of what data-driven organizations look like, creating new lines of thinking within a diverse range of clients and situations. You will constantly engage with our clients and team members as they architect new systems and strategies for extracting, transforming and optimizing data flows from complex, sometimes disparate, data sources.
You will work with our clients' entire data ecosystem to enable effective, highly scalable data analysis pipelines. This role will develop, implement and maintain distributed systems around all elements of data analysis with a constant eye toward continuous improvement.