Takami Sato (University of California, Irvine), Sri Hrushikesh Varma Bhupathiraju (University of Florida), Michael Clifford (Toyota InfoTech Labs), Takeshi Sugawara (The University of Electro-Communications), Qi Alfred Chen (University of California, Irvine), Sara Rampazzi (University of Florida)

All vehicles must follow the rules that govern traffic behavior, regardless of whether the vehicles are human-driven or Connected Autonomous Vehicles (CAVs). Road signs indicate locally active rules, such as speed limits and requirements to yield or stop. Recent research has demonstrated attacks, such as adding stickers or projecting colored patches onto signs, that cause CAV misinterpretation, resulting in potential safety issues. Humans can see, and potentially defend against, these attacks, but humans cannot detect what they cannot observe. We have developed an effective physical-world attack that leverages the sensitivity of filterless image sensors and the properties of Infrared Laser Reflections (ILRs), which are invisible to humans. The attack is designed to affect CAV cameras and perception, undermining traffic sign recognition by inducing misclassification. In this work, we formulate the threat model and requirements for an ILR-based traffic sign perception attack to succeed. We evaluate the effectiveness of the ILR attack with real-world experiments against two major traffic sign recognition architectures on four IR-sensitive cameras. Our black-box optimization methodology allows the attack to achieve up to a 100% attack success rate in indoor, static scenarios and a ≥80.5% attack success rate in our outdoor, moving-vehicle scenarios. We find that the latest state-of-the-art certifiable defense is ineffective against ILR attacks, as it mis-certifies ≥33.5% of cases. To address this, we propose a detection strategy based on the physical properties of IR laser reflections, which can detect 96% of ILR attacks.
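To illustrate the kind of black-box optimization the abstract refers to, here is a minimal query-only sketch. It is not the authors' implementation: `simulate_ilr`, `target_confidence`, and the four-parameter spot model (position, radius, intensity) are hypothetical stand-ins, and in the physical attack the candidate parameters would be evaluated on frames captured from the victim camera rather than on a software overlay.

```python
# Minimal sketch of a black-box ILR attack loop (hypothetical names/parameters).
import numpy as np

rng = np.random.default_rng(0)

def simulate_ilr(image, params):
    """Hypothetical renderer: overlay a bright IR-reflection spot on a sign crop.

    params = (x, y, radius, intensity), each normalized to [0, 1].
    """
    h, w, _ = image.shape
    x, y, r, a = params
    yy, xx = np.mgrid[0:h, 0:w]
    spot = (xx - x * w) ** 2 + (yy - y * h) ** 2 <= (r * min(h, w)) ** 2
    out = image.astype(np.float32)
    out[spot] = np.clip(out[spot] + a * 255.0, 0.0, 255.0)  # sensor saturation
    return out

def attack_loss(image, params, target_confidence):
    """Black-box objective: the victim classifier's confidence in the true
    class (lower is better). target_confidence is an opaque query function."""
    return target_confidence(simulate_ilr(image, params))

def random_search(image, target_confidence, iters=500):
    """Keep the candidate spot that best suppresses the true-class confidence."""
    best_p, best_l = None, np.inf
    for _ in range(iters):
        p = rng.uniform(0.0, 1.0, size=4)
        l = attack_loss(image, p, target_confidence)
        if l < best_l:
            best_p, best_l = p, l
    return best_p, best_l
```

Random search is only a placeholder for whatever query-efficient optimizer the paper actually uses; the point is that the loop needs nothing from the model beyond its output confidences.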
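The abstract does not spell out which physical property the proposed detection exploits, so the following sketch rests on an assumption: that near-IR laser light leaking through a filterless sensor's Bayer array raises the red and blue channels far more than green, producing a characteristic tint. Both functions and all thresholds here are illustrative, not the paper's detector.

```python
# Minimal detection sketch, assuming the physical cue is near-IR channel leakage.
import numpy as np

def ir_reflection_mask(image, ratio=1.4, min_level=120.0):
    """Flag bright pixels whose red and blue channels both dominate green."""
    img = image.astype(np.float32)
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    bright = np.maximum(r, b) > min_level
    tinted = (r > ratio * (g + 1e-6)) & (b > ratio * (g + 1e-6))
    return bright & tinted

def flag_ilr_attack(sign_crop, area_threshold=0.02):
    """Declare an attack if the suspicious region covers enough of the sign crop."""
    return ir_reflection_mask(sign_crop).mean() > area_threshold
```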
