Go Tsuruoka (Waseda University), Takami Sato, Qi Alfred Chen (University of California, Irvine), Kazuki Nomoto, Ryunosuke Kobayashi, Yuna Tanaka (Waseda University), Tatsuya Mori (Waseda University/NICT/RIKEN)

Traffic signs, which communicate critical rules to keep traffic safe and efficient for pedestrians and motor vehicles alike, must be reliably recognized, especially in the realm of autonomous driving. However, recent studies have revealed that vision-based traffic sign recognition systems are vulnerable to adversarial attacks, typically involving small stickers or laser projections. Our work advances this frontier by exploring a novel attack vector: the Adversarial Retroreflective Patch (ARP) attack. This method is stealthy and particularly effective at night because it exploits the optical properties of retroreflective materials, which reflect light back toward its source. When retroreflective patches are applied to a traffic sign, the light reflected from the vehicle’s headlights interferes with its camera, causing perturbations that hinder the traffic sign recognition model’s ability to correctly detect the sign. In a preliminary feasibility study of ARP attacks, we observed that although a 100% attack success rate is achievable in digital simulations, the rate drops to at most 90% in physical experiments. Finally, we discuss the current challenges and outline our future plans. This research is significant in the context of autonomous vehicles’ 24/7 operation, underscoring the critical need to assess sensor and AI vulnerabilities, especially in low-light nighttime environments, to ensure the continued safety and reliability of self-driving technologies.
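To make the digital-simulation step concrete, the sketch below shows one plausible way to approximate an ARP perturbation on a sign image: pixels under a patch mask are driven toward saturation, mimicking the glare a retroreflective patch returns to the camera under headlight illumination. This is a minimal illustration, not the authors' implementation; the additive glare model, the patch placement, and the `classify` stub (standing in for a real traffic sign recognition model) are all assumptions.

```python
import numpy as np

def apply_retroreflective_patch(sign_rgb, patch_mask, glare_intensity=4.0):
    """Approximate headlight glare from a retroreflective patch:
    pixels under the mask are pushed toward saturation, mimicking
    the bloom seen by the vehicle camera. Simplified additive model."""
    img = sign_rgb.astype(np.float32) / 255.0
    glare = patch_mask[..., None] * glare_intensity  # per-pixel brightness boost
    attacked = np.clip(img + glare, 0.0, 1.0)       # saturate at white
    return (attacked * 255).astype(np.uint8)

# --- demo with synthetic stand-ins for a real sign photo and model ---
rng = np.random.default_rng(0)
sign = rng.integers(0, 255, size=(64, 64, 3), dtype=np.uint8)  # placeholder sign image
mask = np.zeros((64, 64), dtype=np.float32)
mask[20:30, 20:44] = 1.0  # rectangular patch region (hypothetical placement)

attacked = apply_retroreflective_patch(sign, mask)

def classify(image):
    """Placeholder for a traffic sign recognition model; a real
    evaluation would query a trained detector/classifier here."""
    return int(image.mean() > 128)  # toy decision rule, NOT a real TSR model

print("clean prediction:   ", classify(sign))
print("attacked prediction:", classify(attacked))
```

In an actual evaluation, the success rate would be measured by sweeping patch placements and glare intensities across a labeled sign dataset and counting misdetections by the target model.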
