Masashi Fukunaga (Mitsubishi Electric), Takeshi Sugawara (The University of Electro-Communications)

Integrity of sensor measurements is crucial for safe and reliable autonomous driving, and researchers are actively studying physical-world injection attacks against light detection and ranging (LiDAR). Prior work focused on object/obstacle detectors, and the impact on LiDAR-based simultaneous localization and mapping (SLAM) has remained an open research problem. Addressing this issue, we evaluate the robustness of a scan-matching SLAM algorithm in a simulation environment, based on an attacker capability characterized by indoor and outdoor physical experiments. Our attack builds on Sato et al.'s asynchronous random spoofing attack, which penetrates the randomization countermeasures in modern LiDARs. The attack is effective with fake points injected behind the victim vehicle and can potentially evade detection-based countermeasures that operate within the range of object detectors. We discover that mapping is susceptible along the z-axis, the direction perpendicular to the ground, because feature points are scarce both in the sky and on the road. The attack results in significant changes to the map, such as a downhill converted into an uphill. The false map induces errors in the self-position estimate on the x-y plane in each frame, and these errors accumulate over time. In our experiment, after laser injection sustained over 5 meters of travel (i.e., 1 second), the victim SLAM's self-position begins to diverge from reality and continues to do so, resulting in a 5 m shift to the right after driving 125 meters. The false map and self-position also significantly affect the motion planning algorithm: the planned trajectory changes by 3°, with which the victim vehicle enters the opposite lane after driving 35 meters. Finally, we discuss possible mitigations against the proposed attack.
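As a back-of-the-envelope check on the lane-departure figure above, here is a minimal sketch of how a constant 3° heading bias accumulates into lateral displacement over 35 meters. The 3.5 m lane width is our own illustrative assumption, not a figure from the paper.

```python
import math

# Hedged sketch: a constant heading bias (the 3-degree trajectory change
# reported in the abstract) accumulating into lateral displacement.
heading_bias_deg = 3.0      # trajectory change induced by the false map
distance_m = 35.0           # distance travelled after the attack
lane_half_width_m = 1.75    # half of an assumed 3.5 m lane (our assumption)

# Driving distance_m along a heading rotated by heading_bias_deg produces
# a lateral offset of distance * sin(bias).
lateral_shift_m = distance_m * math.sin(math.radians(heading_bias_deg))
print(f"Lateral shift after {distance_m:.0f} m: {lateral_shift_m:.2f} m")
print("Crosses into the opposite lane:", lateral_shift_m > lane_half_width_m)
```

Under these assumptions the shift is about 1.83 m, which exceeds the assumed half-lane width and is consistent with the abstract's claim that the vehicle enters the opposite lane after 35 meters.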
