PhD Research Intern, Deep Learning (Fall 2019)

At Lyft, community is what we are and it’s what we do. It’s what makes us different. To create the best ride for all, we start in our own community by creating an open, inclusive, and diverse organization where all team members are recognized for what they bring.

Lyft’s mission is to improve people’s lives with the world’s best transportation. And self-driving cars are critical to that mission: they can make our streets safer, cities greener, and traffic a thing of the past. That’s why we started Level 5, our self-driving division, where we’re building a self-driving system to operate on the Lyft network.

Level 5 is looking for creative problem solvers to join us in developing the leading self-driving system for ridesharing. Our team members come from diverse backgrounds and areas of expertise, and each has the opportunity to make a meaningful impact on the future of our technology. Our world-class software and hardware experts work in brand-new garages and labs in Palo Alto, California, and offices in London, England, and Munich, Germany. We’re moving at an incredible pace and already providing employee rides in our test vehicles through the Lyft app. Learn more at

As part of the Autonomy Group, you will collaborate with software engineers to tackle advanced AI challenges. Eventually we expect all Autonomy team members to work on a variety of problems across the autonomy space; however, on the Perception team, your work will initially involve interpreting sensor data from multiple modalities into a model of the world. For this position, we are looking for a PhD research intern with a passion for applied deep learning for autonomous vehicles and strong expertise in computer vision and machine learning.

Responsibilities:
  • Work on core perception algorithms such as sensor calibration, object detection, tracking, segmentation, and state estimation.
  • Implement state-of-the-art algorithms based on latest publications in Computer Vision, Perception, and Machine Learning.
  • Conceive novel deep learning algorithms to improve the Perception stack and publish groundbreaking work.
  • Evaluate the performance of the perception stack and track it over time.
  • Produce production-quality Python or C++ code.
Experience & Skills:
  • Experience with research communities, including publishing papers as a listed author at conferences (e.g., NeurIPS, ICML, CVPR, ECCV/ICCV).
  • Expertise in machine learning and deep neural networks, including applying ML and CNN/DNN techniques to perception tasks.
  • Experience building machine learning applications using a broad range of tools such as decision trees, hidden Markov models, and deep neural networks.
  • Experience with deep learning frameworks such as TensorFlow, PyTorch, or Caffe.
  • Ability to collaborate with internal partners and openness to new and different ideas.
Minimum Requirements:
  • Pursuing a PhD degree or higher in Computer Science, Electrical Engineering, or a related field and returning to a degree program after completion of the internship
  • Available for a 4–6 month internship between September 1, 2019 and April 30, 2020
  • Must obtain work authorization in the country of employment at the time of hire, and maintain ongoing work authorization during employment

Lyft is an Equal Employment Opportunity employer that proudly pursues and hires a diverse workforce. Lyft does not make hiring or employment decisions on the basis of race, color, religion or religious belief, ethnic or national origin, nationality, sex, gender, gender-identity, sexual orientation, disability, age, military or veteran status, or any other basis protected by applicable local, state, or federal laws or prohibited by Company policy. Lyft also strives for a healthy and safe workplace and strictly prohibits harassment of any kind. Pursuant to the San Francisco Fair Chance Ordinance and other similar state laws and local ordinances, and its internal policy, Lyft will also consider for employment qualified applicants with arrest and conviction records.
