Ryunosuke Kobayashi, Kazuki Nomoto, Yuna Tanaka, Go Tsuruoka (Waseda University), Tatsuya Mori (Waseda University/NICT/RIKEN)

Object detection is a crucial function that identifies the position and type of objects from data acquired by sensors. In autonomous driving systems, object detection is performed using data from cameras and LiDAR, and based on the results, the vehicle is controlled to follow the safest route. However, machine learning-based object detection has been reported to be vulnerable to adversarial examples. In this study, we propose a new attack method against LiDAR object detection models, called "Shadow Hack". Whereas previous attack methods mainly added perturbed point clouds to LiDAR data, our method generates "Adversarial Shadows" in the LiDAR point cloud: the attacker strategically places materials such as aluminum leisure mats so that shadows of optimized position and shape appear in the point cloud. This technique can mislead LiDAR-based object detection in autonomous vehicles, potentially causing congestion and accidents through actions such as braking and avoidance maneuvers. We reproduce the Shadow Hack attack in simulation and evaluate its success rate. Furthermore, by identifying the conditions under which the attack succeeds, we aim to propose countermeasures and contribute to enhancing the robustness of autonomous driving systems.
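Conceptually, an adversarial shadow removes LiDAR returns inside the angular footprint of the placed material. The sketch below is a hypothetical illustration, not code from the paper: the apply_shadow function, its parameters, and the simple angular-sector shadow model are assumptions made for simulation purposes only.

# Hypothetical sketch: simulate an "adversarial shadow" by deleting LiDAR
# returns whose azimuth/elevation fall inside the occluded sector behind a
# placed reflective mat. All names and parameters are illustrative.
import numpy as np

def apply_shadow(points, az_range, elev_range, min_range=0.0):
    # points: (N, 3) array of x, y, z coordinates in the sensor frame.
    # az_range, elev_range: (low, high) angles in radians defining the shadow cone.
    # min_range: only points farther than this distance are shadowed.
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.linalg.norm(points, axis=1)
    az = np.arctan2(y, x)                                           # horizontal angle
    elev = np.arcsin(np.clip(z / np.maximum(r, 1e-9), -1.0, 1.0))   # vertical angle
    in_shadow = (
        (az >= az_range[0]) & (az <= az_range[1]) &
        (elev >= elev_range[0]) & (elev <= elev_range[1]) &
        (r > min_range)
    )
    return points[~in_shadow]

# Example: carve a narrow shadow in a synthetic point cloud ahead of the vehicle.
cloud = np.random.uniform(-20, 20, size=(10000, 3))
shadowed = apply_shadow(cloud, az_range=(-0.05, 0.05), elev_range=(-0.3, 0.0), min_range=2.0)

In the actual attack the shadow's position and shape would be optimized against the target detector; this sketch only shows how a candidate shadow could be applied to a point cloud during such an evaluation.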
