Are you excited about the impact that optimizing deep learning models can have on enabling transformative user experiences? The field of ML compression research continues to grow rapidly, and new techniques for quantization, pruning, and related optimizations are increasingly available to be ported and adopted by the ML developer community, which is looking to ship more models within constrained memory budgets and make them run faster. We are passionate about productizing state-of-the-art model optimization algorithms and pushing them further, to compress and speed up the thousands of models that ship in Apple's internal and external apps and run locally on millions of Apple devices. Our team collaborates heavily with researchers, ML software and hardware architecture teams, and internal and external product teams shipping models on Apple devices. If you are excited about making a big impact and playing a critical role in growing the user base and driving adoption of a relatively new library, this is a great opportunity for you. We are looking for someone who is highly self-motivated and passionate about optimizing models for on-device execution. If you have a proven track record of developing and working with the internals of an ML Python library, writing high-quality code, and shipping software, we strongly encourage you to apply.
Description
We work on a Python library that implements a variety of training-time and post-training quantization algorithms, provides them to developers as simple-to-use, turnkey APIs, and ensures that these optimizations work seamlessly with the Core ML inference stack and Apple hardware. Our algorithms are implemented in PyTorch. We optimize models across domains, including natural language processing, vision, and generative models.
In this role, the Model Optimization Engineer will be an expert in the internals of PyTorch: graph capture and graph editing mechanisms, methods to observe and modify intermediate activations and weights, tensor subclasses, custom ops, and the different types of parallelism used for training models. You will use this knowledge to implement and evolve the core infrastructure of the optimization library, enabling efficient and scalable implementations of various classes of compression algorithms (a brief illustration of these mechanisms follows the list below). You'll also set up and debug training jobs, datasets, evaluation, and performance-benchmarking pipelines; ramp up quickly on new training code bases; and run detailed experiments and ablation studies to profile algorithms on various models and tasks, across different model sizes.
In this role, you will:
- Design and develop the core infrastructure that powers the implementations of various compression algorithms (training-time, post-training, data-free, calibration-data-based, etc.)
- Implement the latest model compression algorithms from research papers in the optimization library.
- Collaborate with software and hardware engineers from the ML compiler and inference stack to co-develop new compression operations and model export flows for on-device deployment.
- Design clean, intuitive, maintainable APIs.
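For candidates less familiar with these mechanisms, here is a minimal sketch using public PyTorch APIs (torch.fx for graph capture and torch.ao for turnkey post-training quantization); it is illustrative only, not this team's library, and the toy `SmallNet` model is hypothetical:

```python
# Illustrative sketch using public PyTorch APIs (torch.fx, torch.ao);
# not Apple's internal library. `SmallNet` is a hypothetical toy model.
import torch
import torch.nn as nn
import torch.fx as fx
from torch.ao.quantization import quantize_dynamic


class SmallNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(32, 64)
        self.fc2 = nn.Linear(64, 10)

    def forward(self, x):
        return self.fc2(torch.relu(self.fc1(x)))


model = SmallNet().eval()

# Graph capture: torch.fx traces the model into an editable graph IR,
# the kind of representation compression passes inspect and rewrite.
traced = fx.symbolic_trace(model)
for node in traced.graph.nodes:
    print(node.op, node.target)

# Turnkey post-training quantization: dynamically quantize Linear layers
# to int8, shrinking weights without requiring calibration data.
quantized = quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)
print(quantized(torch.randn(1, 32)).shape)  # torch.Size([1, 10])
```

Observing or modifying intermediate activations, another mechanism mentioned above, is commonly done with forward hooks (`module.register_forward_hook`) or by rewriting the traced graph.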
Minimum Qualifications
- Bachelor's degree in Computer Science, Engineering, or a related discipline.
- 3+ years of industry and/or research experience
- Highly proficient in Python programming
- Proficiency in at least one ML authoring framework, such as PyTorch, TensorFlow, JAX, or MLX
- Experience with model compression and quantization techniques, especially with one of the optimization libraries for an ML framework (e.g., torch.ao).
Preferred Qualifications
- Demonstrated ability to design user-friendly and maintainable APIs
- Experience in training, fine-tuning, and optimizing neural network models
- Primary contributor to a model optimization/compression library.
- Ability to self-prioritize and adapt to changing priorities and requests
- Experience improving model optimization documentation and writing tutorials and guides
- Strong communication skills, including the ability to communicate with cross-functional audiences
Pay & Benefits
At Apple, base pay is one part of our total compensation package and is determined within a range. This provides the opportunity to progress as you grow and develop within a role. The base pay range for this role is between $143,100 and $264,200, and your base pay will depend on your skills, qualifications, experience, and location.
Apple employees also have the opportunity to become an Apple shareholder through participation in Apple's discretionary employee stock programs. Apple employees are eligible for discretionary restricted stock unit awards, and can purchase Apple stock at a discount if voluntarily participating in Apple's Employee Stock Purchase Plan. You'll also receive benefits including: Comprehensive medical and dental coverage, retirement benefits, a range of discounted products and free services, and for formal education related to advancing your career at Apple, reimbursement for certain educational expenses - including tuition. Additionally, this role might be eligible for discretionary bonuses or commission payments as well as relocation. Learn more about Apple Benefits.
Note: Apple benefit, compensation and employee stock programs are subject to eligibility requirements and other terms of the applicable plan or program.
Apple is an equal opportunity employer that is committed to inclusion and diversity. We seek to promote equal opportunity for all applicants without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, Veteran status, or other legally protected characteristics. Learn more about your EEO rights as an applicant.