Christopher DiPalma, Ningfei Wang, Takami Sato, and Qi Alfred Chen (UC Irvine)

Robust perception is crucial to autonomous vehicle security. In this work, we design a practical adversarial patch attack against camera-based obstacle detection. We identify the back of a box truck as an effective attack vector, and we improve attack robustness by considering a variety of input frames associated with the attack scenario. This demo includes videos showing that our attack can cause end-to-end consequences on a representative autonomous driving system in a simulator.
