Henry Xu, An Ju, and David Wagner (UC Berkeley)

Baidu Security Auto-Driving Security Award Winner ($1000 cash prize)!

The susceptibility of neural networks to adversarial attack raises serious safety concerns for lane detection, a domain where such models have been widely applied. Recent work on adversarial road patches has successfully induced perception of lane lines of arbitrary form, presenting an avenue for rogue control of vehicle behavior. In this paper, we propose a modular lane verification system that can catch such threats before the autonomous driving system is misled, while remaining agnostic to the particular lane detection model. Our experiments show that implementing the system with a simple convolutional neural network (CNN) can defend against a wide gamut of attacks on lane detection models. With a 10% increase in inference time, we detect 96% of bounded non-adaptive attacks, 90% of bounded adaptive attacks, and 98% of patch attacks, while preserving accurate identification of at least 95% of true lanes. These results indicate that our proposed verification system effectively mitigates lane detection security risks with minimal overhead.
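The abstract describes the verifier only at a high level; as a purely illustrative aid, the sketch below shows one way such a model-agnostic verification stage could be structured in PyTorch. It is not the authors' implementation: the LaneVerifier network, the crop_fn helper, the crop-based input, and the 0.5 acceptance threshold are all assumptions introduced here for illustration.

    # Illustrative sketch (not the paper's released code): a small binary
    # CNN scores image crops around each detected lane line, so the
    # verifier can wrap any upstream lane detection model.
    import torch
    import torch.nn as nn

    class LaneVerifier(nn.Module):
        """Small CNN scoring whether a crop contains a genuine lane line."""

        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),
                nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),
                nn.AdaptiveAvgPool2d(1),
            )
            self.classifier = nn.Linear(32, 1)  # logit: real lane vs. not

        def forward(self, crops: torch.Tensor) -> torch.Tensor:
            x = self.features(crops).flatten(1)
            return self.classifier(x).squeeze(1)

    def verify_lanes(verifier, image, lanes, crop_fn, threshold=0.5):
        """Keep only the detected lanes the verifier accepts as genuine.

        `lanes` is whatever the upstream detector emits (e.g. polylines);
        `crop_fn` (assumed here) extracts a fixed-size image crop around
        each lane, keeping the verifier agnostic to the detection model.
        """
        if not lanes:
            return []
        crops = torch.stack([crop_fn(image, lane) for lane in lanes])
        with torch.no_grad():
            scores = torch.sigmoid(verifier(crops))
        return [lane for lane, s in zip(lanes, scores) if s.item() >= threshold]

Because the verifier only consumes crops of the final detections, it adds a single small forward pass per frame, which is consistent with the modest inference-time overhead reported in the abstract.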
