Staff Research Scientist, Applied Machine Learning Security (Agent Systems)


At Apple, we believe privacy is a fundamental human right. Our Security Engineering & Architecture (SEAR) organization is at the forefront of protecting billions of users worldwide, building security into every product, service, and experience we create.

The SEAR ML Security Engineering team combines cutting-edge machine learning with world-class security engineering to defend against evolving threats at unprecedented scale. We're responsible for developing intelligent security systems for Apple Intelligence that protect Apple's ecosystem while preserving the privacy our users expect and deserve.

We're seeking a staff-level ML Security Research Scientist who operates at the intersection of applied research and production impact. You'll lead original security research on agentic ML systems deployed at scale, driving secure agentic design directly into shipping products, identifying real vulnerabilities in tool-using models, and designing adversarial evaluations that reflect actual attacker behavior. You'll work at the boundary between research, platform engineering, and product security, translating findings into architectural decisions, launch requirements, and long-term hardening strategies that protect billions of users. Your impact will be measured by risk reduction in production systems that ship.

Description

This role focuses on applied security research for production ML systems, with an emphasis on agentic and tool-using models deployed at scale. You will lead research efforts that surface real security risks in shipped or near-shipped systems, and you will drive mitigations that integrate cleanly into Apple's ML platforms and products.

You will operate at the boundary between research, platform engineering, and product security, conducting original research grounded in real system behavior and translating it into concrete design changes, launch requirements, and long-term hardening strategies. Impact is measured by risk reduction in production, not theoretical results alone.

","responsibilities":"Lead applied research on production agent systems: Conduct original security research on deployed agentic ML systems that interact with tools, APIs, memory, workflows, and sensitive data. Identify and characterize vulnerabilities such as indirect prompt injection, tool misuse, privilege escalation, goal hijacking, and cross-context data leakage, and develop defenses validated under production constraints.

Design realistic adversarial evaluations: Build and maintain adversarial testing frameworks that reflect real attacker incentives and system complexity, including multi-step, cross-tool, and persistence-based attacks that surface failure modes missed by standard evaluations.

Drive defenses into shipping systems: Develop mitigations that are compatible with production requirements around latency, reliability, debuggability, and privacy. Influence architectural choices such as capability scoping, isolation boundaries, execution control, and runtime enforcement.

Own threat models for agent deployments: Define trust boundaries and threat models for agentic ML across Apple platforms and services, and translate them into actionable security requirements and release criteria.

Bridge research and engineering: Partner deeply with ML platform teams, product engineering, and product security to ensure research insights become design guidance, test infrastructure, and launch blockers where appropriate.

Provide technical leadership: Set standards for applied ML security research, mentor other researchers, and influence how agent systems are reviewed, built, and released across the organization.

Preferred Qualifications

Experience researching or securing LLM-based or tool-augmented ML systems.

Ability to work fluidly across research, engineering, and security review processes.

Track record of influencing production systems through research-driven insights.

Publications in top venues are a plus, but production impact is the primary signal.

Minimum Qualifications

Ph.D. or equivalent experience in machine learning, security, systems, or a related field.

Demonstrated experience in applied ML security, adversarial ML, or systems security with real-world impact.

Strong experimental and engineering skills, with an emphasis on reproducibility and operational relevance.

Apple is an equal opportunity employer that is committed to inclusion and diversity. We seek to promote equal opportunity for all applicants without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, Veteran status, or other legally protected characteristics. Learn more about your EEO rights as an applicant.

Client-provided location(s): Cupertino, CA
Job ID: apple-200642546-0836_rxr-660
Employment Type: OTHER
Posted: 2026-01-24T19:20:29

Perks and Benefits

  • Health and Wellness
  • Parental Benefits
  • Work Flexibility
  • Office Life and Perks
  • Vacation and Time Off
  • Financial and Retirement
  • Professional Development
  • Diversity and Inclusion
