Christopher DiPalma, Ningfei Wang, Takami Sato, and Qi Alfred Chen (UC Irvine)

Robust perception is crucial for autonomous vehicle security. In this work, we design a practical adversarial patch attack against camera-based obstacle detection. We identify the back of a box truck as an effective attack vector. We further improve attack robustness by optimizing over a variety of input frames associated with the attack scenario. This demo includes videos showing that our attack causes end-to-end consequences on a representative autonomous driving system in a simulator.
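The abstract does not give implementation details, but "considering a variety of input frames" suggests an Expectation-over-Transformation-style optimization: averaging the attack gradient over transformed views of the patch so it stays effective as the scene changes. The sketch below illustrates that idea on a toy, hypothetical linear "detector" (the weights, the brightness-based transform set, and all function names are illustrative assumptions, not the authors' actual pipeline):

```python
# Toy "detector": a linear score over patch pixels; a high score means
# the obstacle is detected. These weights are a hypothetical stand-in
# for a real camera-based detection model.
W = [0.9, -0.2, 0.7, 0.4, -0.5, 0.8]

def detect_score(patch, brightness):
    """Detection score of the patch as seen under a brightness transform."""
    return brightness * sum(w * p for w, p in zip(W, patch))

def eot_patch_attack(patch, steps=200, lr=0.05):
    """EOT-style sketch: average the score gradient over a set of frame
    transforms (here, brightness factors standing in for varied frames)
    and descend to suppress the detection score across all of them."""
    brightness_set = [0.6, 0.8, 1.0, 1.2]  # assumed transform samples
    mean_b = sum(brightness_set) / len(brightness_set)
    for _ in range(steps):
        # Analytic gradient of the linear score w.r.t. each pixel,
        # averaged over the sampled transforms: mean_b * w_i.
        patch = [min(max(p - lr * mean_b * w, 0.0), 1.0)
                 for p, w in zip(patch, W)]
    return patch
```

Because the averaged gradient is used, the resulting patch lowers the detection score for every transform in the set rather than only for one nominal frame, which is the robustness property the abstract refers to.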
