Takami Sato (University of California, Irvine), Sri Hrushikesh Varma Bhupathiraju (University of Florida), Michael Clifford (Toyota InfoTech Labs), Takeshi Sugawara (The University of Electro-Communications), Qi Alfred Chen (University of California, Irvine), Sara Rampazzi (University of Florida)

All vehicles must follow the rules that govern traffic behavior, regardless of whether the vehicles are human-driven or Connected Autonomous Vehicles (CAVs). Road signs indicate locally active rules, such as speed limits and requirements to yield or stop. Recent research has demonstrated attacks, such as adding stickers or projected colored patches to signs, that cause CAV misinterpretation, resulting in potential safety issues. Humans can see and potentially defend against these attacks, but humans cannot detect what they cannot observe. We have developed an effective physical-world attack that leverages the sensitivity of filterless image sensors and the properties of Infrared Laser Reflections (ILRs), which are invisible to humans. The attack targets CAV cameras and perception, undermining traffic sign recognition by inducing misclassification. In this work, we formulate the threat model and the requirements for a successful ILR-based traffic sign perception attack. We evaluate the effectiveness of the ILR attack with real-world experiments against two major traffic sign recognition architectures on four IR-sensitive cameras. Our black-box optimization methodology allows the attack to achieve up to a 100% attack success rate in indoor, static scenarios and a ≥80.5% attack success rate in our outdoor, moving vehicle scenarios. We find that the latest state-of-the-art certifiable defense is ineffective against ILR attacks because it mis-certifies ≥33.5% of cases. To address this, we propose a detection strategy based on the physical properties of IR laser reflections, which can detect 96% of ILR attacks.
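To make the black-box optimization idea concrete, the following is a minimal, hypothetical sketch. It assumes a simple additive Gaussian-spot model of how an IR reflection appears on a filterless sensor (render_ilr) and a classifier callback predict_probs that maps an HxWx3 image in [0, 1] to class probabilities; these names, the random-search strategy, and the reflection model are illustrative assumptions, not the paper's implementation.

    # Hypothetical sketch: black-box search over ILR parameters (spot position,
    # radius, per-channel intensity) that minimizes the true class's probability.
    import numpy as np

    rng = np.random.default_rng(0)

    def render_ilr(image, cx, cy, radius, gain):
        """Overlay a soft circular reflection patch onto a copy of `image`.
        Assumption: on an IR-sensitive sensor the reflection shows up as a
        broad brightness/tint shift, modeled as an additive Gaussian spot."""
        h, w, _ = image.shape
        ys, xs = np.mgrid[0:h, 0:w]
        spot = np.exp(-((xs - cx) ** 2 + (ys - cy) ** 2) / (2.0 * radius ** 2))
        perturbed = image + gain[None, None, :] * spot[:, :, None]
        return np.clip(perturbed, 0.0, 1.0)

    def black_box_attack(image, true_label, predict_probs, queries=500):
        """Random-search variant of a black-box ILR attack: sample candidate
        laser parameters and keep whichever most suppresses the true class."""
        h, w, _ = image.shape
        best_params, best_score = None, float("inf")
        for _ in range(queries):
            cx, cy = rng.uniform(0, w), rng.uniform(0, h)
            radius = rng.uniform(0.1 * w, 0.5 * w)
            gain = rng.uniform(0.0, 0.6, size=3)   # per-channel tint strength
            candidate = render_ilr(image, cx, cy, radius, gain)
            probs = predict_probs(candidate)
            if probs[true_label] < best_score:
                best_score, best_params = probs[true_label], (cx, cy, radius, gain)
            if np.argmax(probs) != true_label:     # misclassification achieved
                break
        return best_params, best_score

The real attack optimizes physical laser parameters (aim point, power) and must survive the camera's full imaging pipeline and viewpoint changes, which this sketch abstracts away.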
