
AIML - ML Engineer, Safety Human Evaluation

Apple

Cambridge, MA

Join Us in Shaping the Future of Generative AI at Apple! Would you like to play a part in building the next generation of generative AI applications at Apple? We are looking for Machine Learning Engineers to work on ambitious projects that will impact the future of Apple, our products, and the broader world. This role is focused on assessing, quantifying, and improving the safety and inclusivity of Apple's generative-AI-powered features and products.

In this role you'll have the opportunity to tackle innovative problems in machine learning, particularly large language models for text generation, diffusion models for image generation, and mixed model systems for multimodal applications. As a member of the Apple HCMI/Responsible AI group, you will work on Apple's generative models that will power a wide array of new features, as well as longer-term research in the generative AI space. Our team is currently interested in large generative models for vision and language, with particular interest in Responsible AI, safety, fairness, robustness, explainability, and uncertainty in models.



Description

Apple Intelligence is powered by thoughtful data sampling, creation, and curation; high-quality, detailed annotations; and the application of these data to evaluate and mitigate safety concerns of new generative AI features. This role draws heavily on applied data science, scientific investigation and interpretation, cross-functional communication and collaboration, and metrics reporting and presentation to stakeholders and decision-makers.

Responsibilities include:

  • Develop metrics for evaluation of safety and fairness risks inherent to generative models and Gen-AI features
  • Design datasets, identify data needs, and work on creative solutions, scaling and expanding data coverage through human and synthetic generation methods
  • Develop sampling strategies and combine human annotation with auto-grading to deliver high-quality, high-confidence insights at a fast pace and large scale
  • Use and implement data pipelines, and collaborate cross-functionally to execute end-to-end safety evaluations
  • Distill project findings into recommendations for product engineering teams and safety policy development
  • Develop ML-based enhancements to red teaming, model evaluation, and other processes to improve the quality of Apple Intelligence's user-facing products
  • Work with highly sensitive content, with exposure to offensive and controversial material
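To give a flavor of the day-to-day work described above, the sketch below shows one way a safety evaluation summary might be computed with pandas: per-feature safety pass rates from human labels, agreement between human annotators and an auto-grader as a quality check, and a Wilson score interval to express confidence at small sample sizes. The feature names, labels, and helper function are all hypothetical illustrations, not Apple's actual pipeline.

```python
import pandas as pd
from math import sqrt

# Hypothetical evaluation records: each row is one model response,
# labeled "safe"/"unsafe" by a human annotator and by an auto-grader.
records = pd.DataFrame({
    "feature":     ["summarize", "summarize", "image_gen", "image_gen", "image_gen"],
    "human_label": ["safe",      "unsafe",    "safe",      "safe",      "unsafe"],
    "auto_label":  ["safe",      "unsafe",    "safe",      "unsafe",    "unsafe"],
})

def wilson_interval(successes: int, n: int, z: float = 1.96):
    """95% Wilson score interval for a binomial proportion (e.g. a safety pass rate)."""
    if n == 0:
        return (0.0, 0.0)
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    margin = z * sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return (center - margin, center + margin)

# Per-feature safety pass rate (human labels) and human/auto-grader agreement.
records["is_safe"] = records["human_label"] == "safe"
records["agree"] = records["human_label"] == records["auto_label"]
summary = records.groupby("feature").agg(
    n=("is_safe", "size"),
    human_safe_rate=("is_safe", "mean"),
    auto_agreement=("agree", "mean"),
)

# Attach a confidence interval to each pass rate before reporting.
summary[["ci_low", "ci_high"]] = summary.apply(
    lambda row: pd.Series(wilson_interval(int(row["human_safe_rate"] * row["n"]),
                                          int(row["n"]))),
    axis=1,
)
```

In practice, high agreement between humans and the auto-grader is what would justify scaling up with auto-grading alone, while the interval width indicates where more annotation budget is needed.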

Minimum Qualifications

  • MS or PhD in Computer Science, Linguistics, Cognitive Science, HCI, Psychology, Mathematics, Physics, or a similar science or technology field with a strong basis in scientific data collection and analysis + at least 4 years of relevant work experience, or BA/BS with 8+ years of relevant work experience
  • Experience gathering and analyzing language data, image data, and/or multi-modal data, including LLM-generated data
  • Strong experience designing human annotation projects, writing guidelines, and dealing with highly multi-labeled, nuanced, and often conflicting data
  • Proficiency in data science, machine learning, analytics, and programming with Python & Pandas; strong experience with one or more plotting & visualization libraries
  • Ability to collaborate with team members to prioritize competing projects, set and maintain a schedule for milestones and project completions, and communicate with all levels of team members as well as external stakeholders
  • Strong skills for rigorous model quality metrics development; interpretation of experiments and evaluations; and presentation to executives

Preferred Qualifications

  • Experience working in the Responsible AI space
  • Curiosity about fairness and bias in generative AI systems, and a strong desire to help make the technology more equitable
  • Prior scientific research and publication experience
  • Experience working with generative models for evaluation and/or product development, and up-to-date knowledge of common challenges and failures
  • Proven track record of contributing to diverse teams in a collaborative environment
  • A passion for building outstanding, innovative products; this position draws on a wide variety of interdisciplinary skills

Apple is an equal opportunity employer that is committed to inclusion and diversity. We seek to promote equal opportunity for all applicants without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, Veteran status, or other legally protected characteristics. Learn more about your EEO rights as an applicant.


Client-provided location(s): Cambridge, MA, USA
Job ID: apple-200592642
Employment Type: Other
