Apple could use machine learning to shore up LiDAR limitations in self-driving

Apple has published a new paper to arXiv, Cornell’s open directory of scientific research, describing a method that uses machine learning to translate the raw point cloud data gathered by LiDAR arrays into 3D detections of objects such as bicycles and pedestrians, with no additional sensor data required.
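To make that concrete, here is a minimal sketch of the inputs and outputs such a system deals with (the names and shapes below are illustrative assumptions, not Apple’s code): a LiDAR sweep arrives as a plain array of points, and the detector’s job is to turn it into labeled 3D boxes.

```python
import numpy as np
from dataclasses import dataclass
from typing import List

@dataclass
class Box3D:
    """One detected object: a 3D bounding box plus a class label and score."""
    center: np.ndarray   # (x, y, z) of the box center, in meters
    size: np.ndarray     # (length, width, height), in meters
    yaw: float           # heading angle around the vertical axis, in radians
    label: str           # e.g. "car", "pedestrian", "cyclist"
    score: float         # detector confidence in [0, 1]

def detect_objects(point_cloud: np.ndarray) -> List[Box3D]:
    """Stand-in for a learned LiDAR-only detector of the kind the paper describes.

    `point_cloud` is an (N, 4) array with one row per LiDAR return:
    x, y, z coordinates plus the reflectance of the surface that was hit.
    """
    raise NotImplementedError("placeholder for the detection network")

# One simulated LiDAR sweep: 100,000 returns, each (x, y, z, reflectance).
sweep = np.random.rand(100_000, 4).astype(np.float32)
```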

The paper is one of the clearest looks yet we’ve had at Apple’s work on self-driving technology. We know Apple’s working on this because it’s had to admit as much in order to secure a self-driving test permit from the California Department of Motor Vehicles, and because its test car has been spotted in and around town.

At the same time, Apple has been opening up a bit more about its machine learning efforts, publishing papers highlighting its research on its own blog, and now also sharing with the broader research community. This kind of publication practice is often a key ingredient in attracting top talent in the field, researchers who hope to work with the broader community to advance machine learning in general.

This specific paper describes how Apple researchers, including paper authors Yin Zhou and Oncel Tuzel, created something called VoxelNet that can extrapolate and infer objects from a collection of points captured by a LiDAR array. Essentially, LiDAR works by creating a high-resolution map of individual points, emitting laser pulses at its surroundings and registering the reflected results.
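The ‘voxel’ in VoxelNet refers to partitioning that point cloud into a regular 3D grid so a network can learn a feature for every occupied cell. A rough, hypothetical sketch of that first grouping step (not the paper’s actual implementation) might look like this:

```python
import numpy as np
from collections import defaultdict

def voxelize(points: np.ndarray, voxel_size=(0.2, 0.2, 0.4)):
    """Group LiDAR points into voxels on a regular 3D grid.

    `points` is an (N, 4) array of (x, y, z, reflectance) returns; the result
    maps each occupied voxel's integer grid index to the points that fell in it.
    A VoxelNet-style network then learns a feature vector per occupied voxel and
    runs convolutions plus a detection head over the resulting grid.
    """
    indices = np.floor(points[:, :3] / np.array(voxel_size)).astype(np.int32)
    voxels = defaultdict(list)
    for idx, point in zip(map(tuple, indices), points):
        voxels[idx].append(point)
    return {idx: np.stack(pts) for idx, pts in voxels.items()}

# Example: a fake sweep of 10,000 points spread over an 80 m x 80 m x 4 m volume.
sweep = np.random.rand(10_000, 4).astype(np.float32) * np.array([80, 80, 4, 1])
print(f"{len(voxelize(sweep))} occupied voxels")
```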

The research is interesting because it could allow LiDAR to act much more effectively on its own in self-driving systems. Typically, LiDAR sensor data is paired or ‘fused’ with data from optical cameras, radar and other sensors to create a complete picture and perform object detection; being able to rely on LiDAR alone with a high degree of confidence could lead to production and computing efficiencies in actual self-driving cars on the road.
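As a toy illustration of that contrast (the function names and structure here are assumptions for illustration, not how any real stack is built), a fusion pipeline runs a detector per sensor and merges the results, while a LiDAR-only pipeline drops the fusion stage entirely:

```python
import numpy as np
from typing import List

def lidar_detections(point_cloud: np.ndarray) -> List[dict]:
    """Stand-in for a LiDAR-only 3D detector, e.g. a VoxelNet-style network."""
    raise NotImplementedError

def camera_detections(image: np.ndarray) -> List[dict]:
    """Stand-in for a detector running on camera frames."""
    raise NotImplementedError

def fused_pipeline(point_cloud: np.ndarray, image: np.ndarray) -> List[dict]:
    """The typical setup today: one detector per sensor, then merge the results.
    Real systems also align, deduplicate and arbitrate between the sensors."""
    return lidar_detections(point_cloud) + camera_detections(image)

def lidar_only_pipeline(point_cloud: np.ndarray) -> List[dict]:
    """The setup this research points toward: one sensor, one detector, no fusion."""
    return lidar_detections(point_cloud)
```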