Christopher DiPalma, Ningfei Wang, Takami Sato, and Qi Alfred Chen (UC Irvine)

Robust perception is crucial for the security of autonomous vehicles. In this work, we design a practical adversarial patch attack against camera-based obstacle detection. We identify the back of a box truck as an effective attack vector, and we improve attack robustness by considering a variety of input frames associated with the attack scenario. This demo includes videos showing that our attack can cause end-to-end consequences on a representative autonomous driving system in a simulator.
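The multi-frame robustness idea can be illustrated with a small optimization sketch. Everything below is a stand-in chosen for illustration, assuming a PyTorch-style pipeline: the tiny stand-in detector, the synthetic frames, the patch size and placement, and the hiding-style objective are not the authors' models, data, or code.

```python
# Minimal sketch: optimize an adversarial patch over many input frames
# (an Expectation-over-Transformation-style objective), rather than a
# single view. All components are illustrative stand-ins.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Stand-in obstacle detector: any differentiable model mapping an image
# to an "obstacle confidence" score serves for this illustration.
detector = nn.Sequential(
    nn.Conv2d(3, 8, 3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 1), nn.Sigmoid(),
)

# Synthetic frames standing in for views captured as the vehicle
# approaches the box truck (batch of 16 RGB frames, 128x128).
frames = torch.rand(16, 3, 128, 128)

# Adversarial patch placed on the back of the truck; optimized directly.
patch = torch.rand(3, 32, 32, requires_grad=True)
optimizer = torch.optim.Adam([patch], lr=0.01)

def apply_patch(imgs, patch, top=48, left=48):
    """Paste the patch onto every frame at a fixed location (a real
    attack would vary placement and scale per frame)."""
    out = imgs.clone()
    out[:, :, top:top + 32, left:left + 32] = patch.clamp(0, 1)
    return out

for step in range(200):
    optimizer.zero_grad()
    patched = apply_patch(frames, patch)
    conf = detector(patched)   # per-frame obstacle confidence
    loss = conf.mean()         # average over the whole frame set, so the
    loss.backward()            # patch suppresses detection across many
    optimizer.step()           # viewpoints instead of just one
```

Averaging the attack loss over the set of frames, rather than optimizing against a single image, is what this sketch uses to approximate robustness across the viewpoints a vehicle sees while approaching the patched truck.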
