Christopher DiPalma, Ningfei Wang, Takami Sato, and Qi Alfred Chen (UC Irvine)

Robust perception is crucial for autonomous vehicle security. In this work, we design a practical adversarial patch attack against camera-based obstacle detection. We identify the back of a box truck as an effective attack vector, and we improve attack robustness by considering a variety of input frames associated with the attack scenario. This demo includes videos showing that our attack can cause end-to-end consequences on a representative autonomous driving system in a simulator.
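
The core robustness idea described above, crafting one patch that stays effective across many camera frames of the attack scenario, can be illustrated with a minimal sketch. The code below is a hypothetical, simplified illustration rather than the authors' implementation: `DummyObstacleDetector`, `apply_patch`, `optimize_patch`, the random "frames", and the patch placements are all assumed stand-ins for the real obstacle detector and the real approach sequence toward the box truck.

```python
# Sketch: optimize a single adversarial patch jointly over multiple frames
# of the attack scenario (an Expectation-over-Transformation-style idea).
# Everything here is a simplified stand-in, not the attacked system.
import torch
import torch.nn as nn


class DummyObstacleDetector(nn.Module):
    """Placeholder for a camera-based obstacle detector; outputs a single
    objectness score in [0, 1] for the whole frame."""

    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 1)

    def forward(self, x):
        feat = self.backbone(x).flatten(1)
        return torch.sigmoid(self.head(feat))


def apply_patch(frame, patch, top, left):
    """Paste the patch onto the frame at the location where the back of the
    truck appears in this frame (placement is assumed known here)."""
    patched = frame.clone()
    h, w = patch.shape[-2:]
    patched[..., top:top + h, left:left + w] = patch
    return patched


def optimize_patch(detector, frames, placements, steps=200, lr=0.01):
    """Minimize the detector's objectness score summed over all frames, so
    the patch remains effective as the victim vehicle approaches."""
    patch = torch.rand(3, 64, 64, requires_grad=True)
    opt = torch.optim.Adam([patch], lr=lr)
    for _ in range(steps):
        loss = 0.0
        for frame, (top, left) in zip(frames, placements):
            x = apply_patch(frame, patch.clamp(0, 1), top, left)
            loss = loss + detector(x.unsqueeze(0)).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return patch.detach().clamp(0, 1)


if __name__ == "__main__":
    detector = DummyObstacleDetector().eval()
    # Stand-in frames of the approach sequence; real frames would come from
    # the driving scenario with the truck at increasing apparent size.
    frames = [torch.rand(3, 128, 128) for _ in range(4)]
    placements = [(32, 32), (30, 34), (28, 36), (26, 38)]
    adv_patch = optimize_patch(detector, frames, placements, steps=50)
    print(adv_patch.shape)
```

Summing the loss over frames is what distinguishes this from a single-frame patch: the optimizer cannot overfit to one viewpoint and instead finds a perturbation that suppresses detection throughout the assumed approach sequence.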
