Christopher DiPalma, Ningfei Wang, Takami Sato, and Qi Alfred Chen (UC Irvine)

Robust perception is crucial for autonomous vehicle security. In this work, we design a practical adversarial patch attack against camera-based obstacle detection. We identify that the back of a box truck is an effective attack vector. We also improve attack robustness by considering a variety of input frames associated with the attack scenario. This demo includes videos showing that our attack can cause end-to-end consequences on a representative autonomous driving system in a simulator.
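The following is a minimal sketch of how a patch might be optimized over a set of input frames from the attack scenario, in the spirit of expectation-over-transformation-style robustness described above. The detector, loss, patch placement, and frame set are hypothetical placeholders; the abstract does not specify the paper's actual pipeline or models.

```python
# Hypothetical sketch: optimize one patch across many frames so that the
# detector's obstacle confidence drops in the attack scenario.
import torch

def optimize_patch(detector, frames, patch_size=(3, 64, 64),
                   steps=500, lr=0.01):
    """detector: callable mapping a (1, 3, H, W) image to obstacle scores.
    frames: list of (3, H, W) tensors sampled from the attack scenario."""
    patch = torch.rand(patch_size, requires_grad=True)
    optimizer = torch.optim.Adam([patch], lr=lr)

    for _ in range(steps):
        optimizer.zero_grad()
        loss = 0.0
        for frame in frames:
            attacked = frame.clone()
            # Hypothetical fixed placement: paste the patch into the image
            # region corresponding to the rear of the box truck.
            attacked[:, :patch_size[1], :patch_size[2]] = patch.clamp(0, 1)
            # Minimizing the detector's confidence suppresses the obstacle.
            loss = loss + detector(attacked.unsqueeze(0)).mean()
        loss = loss / len(frames)          # average over the frame set
        loss.backward()
        optimizer.step()
        patch.data.clamp_(0, 1)            # keep the patch a valid image

    return patch.detach()
```

Averaging the loss over many frames, rather than optimizing against a single image, is what gives the patch robustness to the viewpoint and distance changes that occur as the victim vehicle approaches the truck.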
