December 25, 2024
Lasers Can "Hack" Self-Driving LiDAR Sensors, Creating False "Blind Spots", New Study Reveals

In what is likely to be another thorn in the side of Elon Musk, Tesla and Autopilot, a report last week revealed that many self-driving systems can be "messed with" using lasers. 

The finding comes from a new study, uploaded in late October and titled "You Can't See Me: Physical Removal Attacks on LiDAR-based Autonomous Vehicles Driving Frameworks," which was also reported on by Cosmos Magazine. 

Researchers in the U.S. and Japan found that vehicles could be tricked into not seeing pedestrians (or other objects in their way) using lasers. These cars use LiDAR to sense objects around them: the sensor emits laser pulses and measures the reflections that come back to judge how far away objects are. 
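As a rough illustration of that ranging principle (not code from the study), a LiDAR distance estimate is simply half the pulse's round-trip time multiplied by the speed of light; the numbers in the sketch below are made up for the example.

```python
# Time-of-flight ranging: a LiDAR pulse travels out to an object and back,
# so the one-way distance is half the round-trip time times the speed of light.
# Illustrative example only; not taken from the study.

SPEED_OF_LIGHT_M_PER_S = 299_792_458.0

def distance_from_round_trip(round_trip_seconds: float) -> float:
    """Estimate object distance (metres) from a pulse's round-trip time."""
    return SPEED_OF_LIGHT_M_PER_S * round_trip_seconds / 2.0

# A reflection arriving ~133 nanoseconds after emission implies an object
# roughly 20 metres away.
print(f"{distance_from_round_trip(133e-9):.1f} m")  # ≈ 19.9 m
```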

The study revealed that a perfectly timed laser shone back into a LiDAR system can create "a blind spot large enough to hide an object like a pedestrian," according to Cosmos. 

The study's abstract says: "While existing attacks on LiDAR-based autonomous driving architectures focus on lowering the confidence score of AV object detection models to induce obstacle misdetection, our research discovers how to leverage laser-based spoofing techniques to selectively remove the LiDAR point cloud data of genuine obstacles at the sensor level before being used as input to the AV perception. The ablation of this critical LiDAR information causes autonomous driving obstacle detectors to fail to identify and locate obstacles and, consequently, induces AVs to make dangerous automatic driving decisions."

Sara Rampazzi, a cybersecurity researcher and professor at the University of Florida, commented: “We mimic the LIDAR reflections with our laser to make the sensor discount other reflections that are coming in from genuine obstacles.”

"The LIDAR is still receiving genuine data from the obstacle, but the data are automatically discarded because our fake reflections are the only one perceived by the sensor," she continued. 

Any laser used in this manner would not only have to be perfectly timed, but would also have to move with the vehicle, the report says. 

University of Michigan computer scientist Yulong Cao, co-author of the report, said: “Revealing this liability allows us to build a more reliable system. In our paper, we demonstrate that previous defence strategies aren’t enough, and we propose modifications that should address this weakness.”

The research will be presented at the 2023 USENIX Security Symposium, the report says. 

Tyler Durden Tue, 11/08/2022 - 22:25
