Generative AI Applied Scientist, SIML - ISE
Apple's System Intelligence and Machine Learning (SIML) team is seeking a senior Generative AI expert to pioneer the next generation of human-centric device interaction and multimodal scene understanding. You will be at the core of our efforts to develop multimodal LLMs that can perceive and understand complex scenes and nuanced human interactions, behaviors, and preferences. This is a unique opportunity to join a leading applied research group known for its foundational contributions to Apple Intelligence, where you will focus on the end-to-end lifecycle of generative models, from novel architecture design and large-scale training to final deployment.
Description
We are looking for a senior applied scientist with strong ML and generative modeling skills who can design, train, and deploy multimodal GenAI technology. You will need to learn quickly, then implement and demonstrate new user experiences using large foundation models. You will build novel and innovative technology, forge collaborations with cross-functional partners, and adapt and iterate your solutions in a dynamic environment.
You will be expected to advance human interaction and scene understanding modeling across various fronts, from a system level to a core ML algorithm level. Some of the myriad challenges include understanding user behavior and preferences from interactions with the device and the environment, retrieving useful and nuanced information based on past interactions, handling deeply interleaved streaming inputs, reasoning over varying temporal contexts, developing memory systems to enable long-term adaptation, and generating semantically rich internal representations to enable open-ended downstream tasks.
You will be responsible for delivering ML models and solutions that can readily be adopted in production pipelines, such as APIs for production-ready ML models and algorithms well-integrated into our training infrastructure.
Responsibilities
Design, train, and deploy large-scale multimodal LLMs, owning the entire lifecycle from initial architecture to final deployment within the constraints
Implement and demonstrate novel, human-centric user experiences by applying the capabilities of large foundation models
Create robust, scalable ML models and APIs that can be well-integrated into Apple's production pipelines and training infrastructure
Work closely with partner teams to build, iterate, and adapt innovative solutions in a dynamic, product-focused environment
Preferred Qualifications
Proven track record of deploying innovative ML technologies in production
Familiarity with developing ML for resource-constrained devices
Experience working with large cross-functional and diverse teams
Minimum Qualifications
PhD or Master's degree in Computer Science, Engineering, or a related field with a focus on machine learning; or equivalent experience
Strong research skills with first-author publications in top-tier ML conferences
Expert-level knowledge of the state of the art in large autoregressive transformer models, multimodal encoders, and representation learning
Experience with multimodal large language models (LLMs)
Strong programming skills in Python and experience maintaining ML codebases grounded in software engineering principles
Apple is an equal opportunity employer that is committed to inclusion and diversity. We seek to promote equal opportunity for all applicants without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, Veteran status, or other legally protected characteristics. Learn more about your EEO rights as an applicant.
Perks and Benefits
Health and Wellness
Parental Benefits
Work Flexibility
Office Life and Perks
Vacation and Time Off
Financial and Retirement
Professional Development
Diversity and Inclusion