Christopher DiPalma, Ningfei Wang, Takami Sato, and Qi Alfred Chen (UC Irvine)

Robust perception is crucial for autonomous vehicle security. In this work, we design a practical adversarial patch attack against camera-based obstacle detection. We identify the back of a box truck as an effective attack vector. We also improve attack robustness by considering a variety of input frames associated with the attack scenario. This demo includes videos showing that our attack can cause end-to-end consequences on a representative autonomous driving system in a simulator.
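The multi-frame robustness idea is in the spirit of expectation-over-transformation style optimization: a single patch is trained so the detector fails on every frame the camera would capture while approaching the patched truck. The PyTorch sketch below illustrates that loop under stated assumptions; StandInDetector, apply_patch, and the hiding-attack objective (minimizing obstacle confidence) are illustrative stand-ins, since the abstract does not specify the actual detector, rendering model, or loss.

    # Minimal sketch of multi-frame adversarial patch optimization.
    # Everything here is a stand-in for illustration: the paper's real
    # detector, patch placement/rendering, and loss are not given above.
    import torch
    import torch.nn as nn

    class StandInDetector(nn.Module):
        """Placeholder obstacle detector: maps an image batch to a
        per-image obstacle-confidence score in [0, 1]."""
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(3, 8, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1),
            )
            self.head = nn.Linear(8, 1)

        def forward(self, x):
            h = self.features(x).flatten(1)
            return torch.sigmoid(self.head(h)).squeeze(1)

    def apply_patch(frame, patch, top, left):
        """Paste the patch onto the frame at (top, left). A faithful
        attack pipeline would instead model the truck's rear surface
        under perspective, distance, and lighting changes."""
        out = frame.clone()
        ph, pw = patch.shape[-2:]
        out[..., top:top + ph, left:left + pw] = patch
        return out

    def optimize_patch(detector, frames, placements, steps=200, lr=0.01):
        """Optimize one patch across many frames of the approach
        scenario so the obstacle goes undetected in every view."""
        patch = torch.rand(3, 64, 64, requires_grad=True)
        opt = torch.optim.Adam([patch], lr=lr)
        for _ in range(steps):
            loss = 0.0
            for frame, (top, left) in zip(frames, placements):
                attacked = apply_patch(frame, patch.clamp(0, 1), top, left)
                # Hiding attack: push the detector's confidence down.
                loss = loss + detector(attacked.unsqueeze(0)).mean()
            opt.zero_grad()
            loss.backward()
            opt.step()
        return patch.detach().clamp(0, 1)

    if __name__ == "__main__":
        detector = StandInDetector()
        # Hypothetical frames of the approach, with a placement per frame.
        frames = [torch.rand(3, 240, 320) for _ in range(8)]
        placements = [(90, 130)] * 8
        patch = optimize_patch(detector, frames, placements, steps=50)
        print("patch shape:", tuple(patch.shape))

Summing the loss over all frames before each gradient step is what buys robustness here: a patch that only fools one viewpoint gets penalized by the remaining frames.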
