Autonomous vehicles that rely on artificial neural networks have no memory of past drives, no matter how many times they have traveled the same route.
Researchers from Cornell University have developed a way for autonomous vehicles to build “memories” of previous experiences and use them in future navigation, especially during harsh weather, when the vehicles cannot safely rely on their sensors.
The work was led by doctoral student Carlos Diaz-Ruiz. The group collected a dataset by driving a car equipped with LiDAR (Light Detection and Ranging) sensors along a 9.3-mile (15-kilometer) loop in and around Ithaca 40 times over 18 months. The traversals capture varying weather conditions (sunny, rainy, snowy) and times of day, yielding 600,000 scenes in total.
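The article does not describe how the dataset is organized internally, but the core idea behind repeated traversals, aligning every pass over the same stretch of road, can be sketched as grouping scenes by a quantized map location. The fields, grid size, and helper names below are illustrative assumptions, not the team’s actual data format.

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Scene:
    """One LiDAR capture from a single traversal (illustrative fields only)."""
    traversal_id: int      # which of the ~40 drives this came from
    timestamp: float       # capture time, separates day/night passes
    utm_x: float           # vehicle position in a metric map frame
    utm_y: float
    point_cloud_path: str  # pointer to the stored LiDAR sweep

def location_key(x: float, y: float, cell_m: float = 10.0) -> tuple[int, int]:
    """Quantize a position into a coarse grid cell so repeated passes line up."""
    return (int(x // cell_m), int(y // cell_m))

def group_by_location(scenes: list[Scene]) -> dict[tuple[int, int], list[Scene]]:
    """Bucket scenes from all traversals by where on the route they were taken."""
    buckets: dict[tuple[int, int], list[Scene]] = defaultdict(list)
    for scene in scenes:
        buckets[location_key(scene.utm_x, scene.utm_y)].append(scene)
    return buckets
```

With scenes bucketed this way, every location on the loop carries observations from many weather conditions and times of day, which is what makes learning from repeated traversals possible.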
“It deliberately exposes one of the key challenges in self-driving cars: poor weather conditions,” said Diaz-Ruiz. “If the street is covered by snow, humans can rely on memories, but without memories, a neural network is heavily disadvantaged.”
Two papers describing the work were presented at the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR 2022), held June 19-24 in New Orleans.
HINDSIGHT is an approach that uses neural networks to compute descriptors of objects as the car passes them. It then compresses these descriptors, which the group has named SQuaSH (Spatial-Quantized Sparse History) features, and stores them on a virtual map, like a “memory” stored in a human brain.
This means that vehicles can “remember” what they saw the last time they drove the same route. The database is continuously updated and shared across vehicles.
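In code terms, the “virtual map” can be thought of as a sparse key-value store indexed by quantized 3D location, where each entry holds a compact learned descriptor. The sketch below is a simplified illustration of that idea, not the authors’ implementation; the voxel size, feature dimension, and the running-average update are all assumptions.

```python
import numpy as np

class SquashMap:
    """A toy spatially-quantized sparse history: voxel key -> compressed feature."""

    def __init__(self, voxel_size_m: float = 1.0, feature_dim: int = 32):
        self.voxel_size = voxel_size_m
        self.feature_dim = feature_dim
        self.store: dict[tuple, np.ndarray] = {}

    def _key(self, xyz: np.ndarray) -> tuple:
        """Quantize a world-frame point into an integer voxel index."""
        return tuple(np.floor(xyz / self.voxel_size).astype(int))

    def write(self, points: np.ndarray, descriptors: np.ndarray) -> None:
        """Store (or refresh) a descriptor for every observed point.

        points:      (N, 3) world-frame LiDAR points from the current pass
        descriptors: (N, D) features produced by some learned encoder
        """
        for xyz, feat in zip(points, descriptors):
            key = self._key(xyz)
            prev = self.store.get(key)
            # Running average so later traversals update, rather than overwrite, memory.
            self.store[key] = feat if prev is None else 0.5 * (prev + feat)

    def read(self, points: np.ndarray) -> np.ndarray:
        """Look up remembered features for the points seen on a new traversal."""
        out = np.zeros((len(points), self.feature_dim), dtype=np.float32)
        for i, xyz in enumerate(points):
            feat = self.store.get(self._key(xyz))
            if feat is not None:
                out[i] = feat
        return out
```

Because the keys are just quantized coordinates in a shared map frame, a store like this could in principle be merged across cars, which is how a continuously updated, shared memory would work in practice.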
“This information can be added as features to any LiDAR-based 3D object detector,” said doctoral student Yurong You. “Both the detector and the SQuaSH representation can be trained jointly without any additional supervision, or human annotation, which is time- and labor-intensive.”
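The quote describes appending remembered features to whatever per-point features a detector already uses, and letting the detection loss train the memory encoder and the detector together. A minimal PyTorch-style sketch of that wiring might look like the following; the module names, dimensions, and encoder are assumptions, not the paper’s architecture.

```python
import torch
import torch.nn as nn

class DetectorWithMemory(nn.Module):
    """Wrap a LiDAR detector so it also consumes retrieved memory features.

    `base_detector` is any module mapping (B, N, point_dim + mem_dim) per-point
    features to detection outputs; the memory encoder is trained jointly
    through the detector's own loss, with no extra labels.
    """

    def __init__(self, base_detector: nn.Module, point_dim: int = 4, mem_dim: int = 32):
        super().__init__()
        self.base_detector = base_detector
        # Small encoder that produces the per-point descriptors written to the map.
        self.memory_encoder = nn.Sequential(
            nn.Linear(point_dim, 64), nn.ReLU(), nn.Linear(64, mem_dim)
        )

    def forward(self, points: torch.Tensor, retrieved: torch.Tensor) -> torch.Tensor:
        """points:    (B, N, point_dim) raw LiDAR features (e.g. xyz + intensity)
        retrieved: (B, N, mem_dim) features read back from the virtual map"""
        fused = torch.cat([points, retrieved], dim=-1)  # append memory per point
        return self.base_detector(fused)

    def encode_for_map(self, points: torch.Tensor) -> torch.Tensor:
        """Descriptors to write back into the map after this traversal."""
        return self.memory_encoder(points)
```

Since the only training signal is the detector’s usual classification and box-regression loss, gradients flow into the memory encoder for free, which matches the “no additional supervision” point in the quote.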
The researchers hope the approach will reduce the development cost of autonomous vehicles and make them more efficient by letting them learn the locations in which they are used the most.
“In reality, you rarely drive a route for the very first time,” said co-author Katie Luo, a doctoral student in the research group. “Either you yourself or someone else has driven it before recently, so it seems only natural to collect that experience and utilize it.”
“The fundamental question is, can we learn from repeated traversals?” said senior author Kilian Weinberger, professor of computer science. “For example, a car may mistake a weirdly shaped tree for a pedestrian the first time its laser scanner perceives it from a distance, but once it is close enough, the object category will become clear. So, the second time you drive past the very same tree, even in fog or snow, you would hope that the car has now learned to recognize it correctly.”