Part Three of A Three Part “How Autonomous Vehicles Work” Series
To make autonomous cars a commercial reality, automakers must equip vehicles with sensing technologies that can build a virtual map of the world through which the vehicle plans its path. One sensor technology fundamental to self-driving cars is light detection and ranging (LiDAR), which provides high-resolution, three-dimensional information about the surrounding environment.
LiDAR can simultaneously locate the position of people and objects around the vehicle and assess the speed and route at which they are moving. Using that information, an on-board computer system can determine the safest way for a self-driving vehicle to drive to its destination.
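At its core, LiDAR measures distance by timing laser pulses. A minimal sketch of that time-of-flight principle is below; the round-trip time used in the example is a hypothetical value chosen for illustration, not a figure from any particular sensor.

```python
# Minimal sketch of the time-of-flight principle behind LiDAR ranging:
# a laser pulse travels to a target and back, and the round-trip time
# gives the distance. The example time below is illustrative only.

C = 299_792_458.0  # speed of light in a vacuum, m/s

def tof_range(round_trip_seconds: float) -> float:
    """Distance to a target from a pulse's round-trip travel time."""
    # Divide by 2 because the pulse covers the distance twice (out and back).
    return C * round_trip_seconds / 2.0

# A pulse returning after roughly 1.33 microseconds indicates a target
# about 200 meters away.
print(round(tof_range(1.334e-6)))  # → 200
```

A real sensor repeats this measurement millions of times per second across many laser channels, which is what produces the dense 3D point cloud described above.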
To see how LiDAR works, check out this video:
Through his participation in the DARPA Grand Challenge, Velodyne LiDAR’s founder and CEO, Dave Hall, recognized LiDAR’s potential to advance the development of autonomous vehicles. Unsatisfied with the limited 2D images that cameras could provide, Hall invented a spinning LiDAR sensor that produces real-time, 360° 3D maps of a vehicle’s surroundings. No team had previously been able to complete the Challenge, and six of the seven teams that finally succeeded in 2007 used LiDAR developed and supplied by Hall.
LiDAR suppliers are advancing multiple technology approaches. In each approach, the same fundamental performance metrics determine whether a LiDAR system can enable a fully autonomous car to operate successfully.
The top performance features to consider when evaluating LiDAR technologies are field of view, range, resolution, and rotation/frame rate. These are the capabilities needed to guide an autonomous vehicle reliably and safely through the complex driving situations it will face on the road.
Let’s review each feature and examine how it impacts a driverless car.
Field of View. It is widely accepted that a 360° horizontal field of view – something not possible for a human driver – is optimal for safe operation of autonomous vehicles. Having a wide horizontal field of view is particularly important in navigating the situations that occur in everyday driving.
Consider, for instance, a high-speed merge onto a highway. The maneuver requires a view diagonally behind the vehicle to see whether another car is approaching in the adjacent lane, a view roughly perpendicular to the direction of travel to confirm there is room to merge, and a continuous forward view to negotiate the traffic ahead. A narrow field of view would be insufficient to execute the merge safely. Rotating LiDAR sensors are optimal for these applications because a single sensor captures a full 360° view. In contrast, if an autonomous vehicle employs sensors with a more limited horizontal field of view, more sensors are required and the vehicle’s computer system must stitch together the data collected by these various sensors.
Vertical field of view is another area where it is important that LiDAR capabilities match real-life driving needs. LiDAR needs to see the road to recognize the driveable area, avoid objects and debris, stay in its lane, and change lanes or turn at intersections when needed. Autonomous vehicles also need LiDAR beams that point high enough to detect tall objects, road signs, and overhangs, and to maintain visibility when traveling up or down slopes.
Range. LiDAR range is a topic that creates significant buzz in the auto industry. Autonomous vehicles need to see as far ahead as possible to optimize safety. At highway speeds, a minimum range of 200 meters gives the vehicle the time it needs to react to changing road conditions and surroundings. Slower, non-highway speeds allow for sensors with shorter range, but vehicles still need to react quickly to unexpected events on the roadway, such as a person focused on a cellphone stepping into the street from between two parked cars, an animal crossing the road, an object falling from a truck, or debris ahead in the road. In each of these situations, onboard sensors need sufficient range to give the vehicle adequate time to detect the person or object, classify what it is, determine whether and how it is moving, and then take steps to avoid it while not hitting another car or object.
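A rough back-of-envelope calculation shows why 200 meters is a sensible benchmark at highway speeds. The reaction latency and braking deceleration below are assumed, illustrative values, not figures from this article or any specific vehicle.

```python
# Back-of-envelope estimate of the sensing range a vehicle needs in order
# to come to a full stop. The reaction latency (1.0 s) and deceleration
# (5.0 m/s^2) are assumed values for illustration only.

def required_range_m(speed_kmh: float,
                     reaction_s: float = 1.0,
                     decel_ms2: float = 5.0) -> float:
    """Distance traveled while reacting, plus braking distance to a stop."""
    v = speed_kmh / 3.6                      # convert km/h to m/s
    reaction_dist = v * reaction_s           # distance covered before braking
    braking_dist = v * v / (2.0 * decel_ms2) # kinematics: v^2 / (2a)
    return reaction_dist + braking_dist

for speed in (50, 100, 130):
    print(f"{speed} km/h -> {required_range_m(speed):.0f} m to stop")
```

Under these assumptions, a vehicle at 130 km/h needs roughly 165 meters just to stop, so a 200-meter sensing range leaves margin for detection and classification before braking even begins.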
Another factor connected to range is reflectivity. Reflectivity refers to an object’s propensity to reflect light back to the sensor. Lighter colored objects reflect more light than darker objects. While many sensors are able to detect objects with high reflectivity at long range, far fewer are able to detect low reflectivity objects at range. Velodyne LiDAR’s newest sensors are able to detect low reflectivity objects at the ranges needed to be highway safe.
Resolution. High-resolution LiDAR is critical for object detection and collision avoidance at all speeds. Finer resolution allows a sensor to more accurately determine the size, shape, and location of objects, with the most advanced LiDAR sensors being able to detect objects within 3 centimeters and some moving closer to 2 centimeters. This finer resolution outperforms even high resolution radar and provides the vehicle with the clearest possible vision of the roadway.
To examine the importance of resolution, we can consider the example of a tire fragment in the road. The LiDAR system needs to be able to not only detect the object but also recognize what it is. This is no inconsequential task, given that it requires detecting a dark object on a dark surface, so a sensor with finer resolution increases the vehicle’s ability to accurately detect and classify the object.
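The geometry behind this is straightforward: the gap between adjacent laser returns grows linearly with distance. The sketch below uses an assumed angular resolution of 0.1° purely to illustrate the relationship; it is not a specification quoted in this article.

```python
# Sketch of how angular resolution translates into the spacing between
# adjacent laser returns at a given distance. The 0.1-degree figure is an
# assumed angular resolution chosen only to illustrate the geometry.

import math

def point_spacing_cm(range_m: float, angular_res_deg: float) -> float:
    """Approximate gap (cm) between adjacent returns at a given range,
    using the small-angle approximation: arc length ~ range * angle."""
    return range_m * math.radians(angular_res_deg) * 100.0

# At 100 m, a 0.1-degree spacing leaves roughly 17 cm between points,
# so a small tire fragment may be struck by only a handful of returns.
print(f"{point_spacing_cm(100, 0.1):.1f} cm")  # → 17.5 cm
```

Halving the angular resolution doubles the number of returns landing on an object of a given size at the same range, which is what makes finer resolution so valuable for classifying small, dark debris.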
Unlike cameras, LiDAR provides 3D images of the surroundings with precise measurements of how far away objects are from the vehicle, which aids the process of responding to roadway events.
LiDAR: Essential to Operating Autonomous Cars Safely
Fully autonomous cars represent a quantum technological leap beyond today’s vehicles. The first entry in this blog series outlined what an autonomous vehicle is and the potential social, economic, and environmental benefits of the autonomous revolution. Next, we described how autonomous vehicles depend on the data provided by their sensors to perceive and navigate the environment. Recognizing LiDAR as the essential sensing technology for this process, this post discussed how leading LiDAR solutions can deliver the field of view, range, resolution, and rotation/frame rate that are imperative to operate autonomous cars safely.
Youtube video of VLS-128 on the freeway.
How Autonomous Vehicles Work
Part 1: How They Will Improve the Cost, Convenience, and Safety of Driving
Part 2: How Autonomous Vehicles Perceive and Navigate Their Surroundings
Part 3: How LiDAR Technology Enables Autonomous Cars to Operate Safely