Masashi Fukunaga (Mitsubishi Electric), Takeshi Sugawara (The University of Electro-Communications)

The integrity of sensor measurements is crucial for safe and reliable autonomous driving, and researchers are actively studying physical-world injection attacks against light detection and ranging (LiDAR). Conventional work has focused on object/obstacle detectors, and the impact of such attacks on LiDAR-based simultaneous localization and mapping (SLAM) has remained an open research problem. To address this issue, we evaluate the robustness of a scan-matching SLAM algorithm in a simulation environment, using an attacker capability characterized through indoor and outdoor physical experiments. Our attack builds on Sato et al.’s asynchronous random spoofing attack, which penetrates the randomization countermeasures in modern LiDARs. The attack is effective with fake points injected behind the victim vehicle and can potentially evade detection-based countermeasures that operate only within the range of object detectors. We find that mapping is particularly susceptible along the z-axis, the direction perpendicular to the ground, because feature points are scarce both in the sky and on the road. The attack results in significant changes to the map, such as a downhill slope being converted into an uphill slope. The false map induces errors in the self-position estimate on the x-y plane in each frame, and these errors accumulate over time. In our experiment, after laser injection over 5 meters of driving (i.e., 1 second), the victim SLAM’s self-position begins to diverge from reality and continues to drift, resulting in a 5 m shift to the right after traveling 125 meters. The false map and self-position also significantly affect the motion planning algorithm: the planned trajectory changes by 3°, with which the victim vehicle enters the opposite lane after traveling 35 meters. Finally, we discuss possible mitigations against the proposed attack.
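
To make the error-accumulation argument concrete, the following minimal Python sketch (not from the paper) illustrates how a small, biased per-frame x-y error from corrupted scan matching compounds into the reported 5 m lateral drift, and how a 3° change in the planned trajectory translates into roughly a lane-width offset after 35 m. The frame rate, vehicle speed, and per-frame error magnitude are hypothetical assumptions chosen only for illustration.

```python
import numpy as np

# Back-of-the-envelope sketch (not the authors' method): a small, biased
# per-frame pose error from corrupted scan matching accumulates into
# lateral drift, and a heading change in the planned trajectory becomes
# a lateral offset. The per-frame error below is a hypothetical value
# chosen only to reproduce the scale reported in the abstract.

FRAME_RATE_HZ = 10          # assumed LiDAR/SLAM frame rate
SPEED_MPS = 5.0             # assumed vehicle speed (5 m per second)
DISTANCE_M = 125.0          # distance over which drift was reported

n_frames = int(DISTANCE_M / SPEED_MPS * FRAME_RATE_HZ)
per_frame_lateral_err = 5.0 / n_frames   # hypothetical biased error per frame

drift = np.cumsum(np.full(n_frames, per_frame_lateral_err))
print(f"accumulated lateral drift after {DISTANCE_M:.0f} m: {drift[-1]:.1f} m")
# -> 5.0 m, matching the reported rightward shift

# Planned-trajectory effect: a 3 degree heading change yields roughly a
# lane-width lateral offset after 35 m of travel.
heading_change_deg = 3.0
travel_m = 35.0
lateral_offset = travel_m * np.sin(np.radians(heading_change_deg))
print(f"lateral offset after {travel_m:.0f} m at {heading_change_deg:.0f} deg: "
      f"{lateral_offset:.2f} m")
# -> about 1.8 m, enough to cross into the opposite lane
```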
