Cloud Infrastructure Spark Software Engineer
- Cupertino, CA
Posted: Oct 28, 2020
Role Number: 200202021
Seeking extraordinary software engineers with deep experience in scalable data processing systems. You will have a passion for pushing the limits of distributed computing frameworks to get every ounce of performance out of them. We are looking for engineers with in-depth knowledge of systems like Spark, Flink, Storm, and other existing frameworks, and for someone who is excited by the prospect of collaborating with other groups within Apple as well as with communities outside Apple.
- A successful track record or demonstrated aptitude as an engineer who has worked on distributed systems.
- An experienced engineer, contributor, or committer to Spark and/or related technologies, with:
- Good knowledge of Apache Spark internals (Catalyst, Tungsten, and related query engine details).
- Good knowledge of the internals of data formats such as Parquet and ORC.
- Experience working on or building connectors from Spark to external data sources.
- Knowledge of YARN, Mesos, Kubernetes, or other compute substrates.
- Knowledge of consensus management systems like ZooKeeper, and of caching solutions.
We are the team that provides Apache Spark as a service to many internal Apple organizations, including iCloud, Maps, iTunes, Business Intelligence, and others. This service forms the foundational platform upon which most of Apple's data engineering and data science systems and use cases are built. The Apple Cloud Infrastructure Spark Engineering team needs a strong, self-driven engineer who can work both with the open-source Spark community and with internal teams to take Spark and related technologies to the next level.
Education & Experience
BS, MS, or PhD degree in computer science, or equivalent experience.