Takami Sato (UC Irvine) and Qi Alfred Chen (UC Irvine)

Deep Neural Network (DNN)-based lane detection is widely used in autonomous driving technologies. At the same time, recent studies demonstrate that adversarial attacks on lane detection can have serious consequences for particular production-grade autonomous driving systems. However, the generality of these attacks, especially their effectiveness against other state-of-the-art lane detection approaches, has not been well studied. In this work, we report our progress on the first large-scale empirical study evaluating the robustness of 4 major types of lane detection methods under 3 types of physical-world adversarial attacks in end-to-end driving scenarios. We find that each lane detection method has different security characteristics and, in particular, that some models are highly vulnerable to certain types of attack. Surprisingly, but perhaps not coincidentally, popular production lane centering systems already adopt the lane detection approach that shows higher resistance to such attacks. As more and more automakers include autonomous driving features in their products, we hope that our research helps them recognize the risks involved in choosing a lane detection algorithm.
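To make the evaluation setup concrete, the sketch below shows a minimal robustness-comparison harness in the spirit of the study: several lane detection models are run on the same frames with and without a patch-style physical-world perturbation, and the attack-induced increase in lateral deviation is recorded per model. This is not the authors' code; the model interface, the patch overlay, and the deviation metric are illustrative placeholders, and a faithful reproduction would require perspective-correct patch placement and closed-loop (end-to-end) driving in a simulator.

import numpy as np

FRAME_H, FRAME_W = 720, 1280


class DummyLaneDetector:
    """Placeholder for a real lane detection model (e.g., segmentation-,
    row-, anchor-, or curve-based); returns a fixed centerline."""

    def detect(self, frame: np.ndarray) -> np.ndarray:
        return np.full(10, FRAME_W / 2, dtype=np.float32)


def apply_road_patch(frame: np.ndarray, patch: np.ndarray,
                     top_left: tuple[int, int]) -> np.ndarray:
    """Overlay an adversarial patch onto the road region of a frame
    (simplified: no perspective warp, which a physical attack would need)."""
    y, x = top_left
    out = frame.copy()
    ph, pw = patch.shape[:2]
    out[y:y + ph, x:x + pw] = patch
    return out


def lateral_deviation(pred_lane: np.ndarray, true_lane: np.ndarray) -> float:
    """Mean absolute lateral offset (pixels) between predicted and
    ground-truth lane center points; a per-frame stand-in for the
    end-to-end driving deviation measured in the actual study."""
    return float(np.mean(np.abs(pred_lane - true_lane)))


def evaluate(models: dict, frames: list, gt_lanes: list,
             patch: np.ndarray, patch_pos: tuple[int, int]) -> dict:
    """Run each model on benign and attacked frames and report the
    average deviation in both conditions."""
    results = {}
    for name, model in models.items():
        benign, attacked = [], []
        for frame, gt in zip(frames, gt_lanes):
            benign.append(lateral_deviation(model.detect(frame), gt))
            adv_frame = apply_road_patch(frame, patch, patch_pos)
            attacked.append(lateral_deviation(model.detect(adv_frame), gt))
        results[name] = {
            "benign_px": float(np.mean(benign)),
            "attacked_px": float(np.mean(attacked)),
        }
    return results


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    frames = [rng.integers(0, 255, (FRAME_H, FRAME_W, 3), dtype=np.uint8)
              for _ in range(3)]
    gt = [np.full(10, FRAME_W / 2, dtype=np.float32) for _ in frames]
    patch = rng.integers(0, 255, (100, 200, 3), dtype=np.uint8)
    models = {"dummy": DummyLaneDetector()}
    print(evaluate(models, frames, gt, patch, patch_pos=(500, 540)))

A harness of this shape lets the per-model "benign vs. attacked" gap expose the differing security characteristics the abstract describes, though the study itself judges impact in end-to-end driving scenarios rather than on per-frame error alone.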
