Christopher DiPalma, Ningfei Wang, Takami Sato, and Qi Alfred Chen (UC Irvine)

Robust perception is crucial for autonomous vehicle security. In this work, we design a practical adversarial patch attack against camera-based obstacle detection. We identify the back of a box truck as an effective attack vector. We also improve attack robustness by optimizing over a variety of input frames associated with the attack scenario. This demo includes videos showing that our attack can cause end-to-end consequences on a representative autonomous driving system in a simulator.
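The idea of improving robustness by considering many input frames can be sketched as follows. This is a minimal illustration only, not the authors' implementation: the detector is replaced by a hypothetical linear surrogate whose per-frame weights stand in for viewpoint and lighting variation, and all names (`detection_score`, `eot_style_update`) are invented for this sketch. The key point is that the patch is updated with the gradient averaged over all frames of the scenario rather than a single view.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical surrogate: the detector's "obstacle confidence" is a linear
# score of the 16 patch pixels. Each frame has its own weight vector,
# standing in for viewpoint/lighting changes across the attack scenario.
frames = [rng.normal(size=16) for _ in range(8)]

def detection_score(patch, frame_weights):
    # Higher score = obstacle detected; the attack tries to minimize it.
    return float(frame_weights @ patch)

def eot_style_update(patch, frames, lr=0.1):
    # Average the gradient of the score over ALL frames, then descend,
    # so the patch suppresses detection across the whole scenario
    # instead of overfitting to one camera view.
    grad = np.mean(frames, axis=0)  # d(score)/d(patch) for the linear surrogate
    return np.clip(patch - lr * grad, 0.0, 1.0)  # keep pixels in a valid range

patch = np.full(16, 0.5)
before = np.mean([detection_score(patch, w) for w in frames])
for _ in range(50):
    patch = eot_style_update(patch, frames)
after = np.mean([detection_score(patch, w) for w in frames])
```

After the loop, the mean detection score across all frames (`after`) is lower than the initial score (`before`), which is the multi-frame robustness objective in miniature. A real attack would replace the linear surrogate with the target detection model and frames rendered from the driving scenario.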
