Christopher DiPalma, Ningfei Wang, Takami Sato, and Qi Alfred Chen (UC Irvine)

Robust perception is crucial for autonomous vehicle security. In this work, we design a practical adversarial patch attack against camera-based obstacle detection. We identify that the back of a box truck is an effective attack vector. We also improve attack robustness by optimizing the patch over a variety of input frames associated with the attack scenario. This demo includes videos showing that our attack can cause end-to-end consequences on a representative autonomous driving system in a simulator.
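The robustness idea above, averaging the attack objective over many frames of the scenario rather than crafting the patch for a single image, can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the `confidence` function is a hypothetical linear stand-in for a detector's obstacle-confidence score (each "frame" is modeled by a weight map over the patch region), whereas a real attack would backpropagate through the detection network on rendered frames.

```python
import numpy as np

rng = np.random.default_rng(0)

def confidence(weights, patch):
    """Toy stand-in for the detector's obstacle-confidence score on one
    frame: a linear function of the patch pixels. Different frames
    (distances, angles, lighting) weight the pixels differently."""
    return float((weights * patch).sum())

def attack_patch(frame_weights, shape, steps=200, lr=0.05):
    """Optimize ONE patch against MANY frames at once: descend on the
    confidence averaged over all frames, so the patch suppresses
    detection across the whole scenario, not just one viewpoint."""
    patch = np.full(shape, 0.5)
    for _ in range(steps):
        # Gradient of the mean confidence w.r.t. the patch; with a real
        # detector this would come from backpropagation instead.
        grad = sum(frame_weights) / len(frame_weights)
        patch = np.clip(patch - lr * grad, 0.0, 1.0)  # printable pixels
    return patch

# Ten simulated frames of the attack scenario.
frames = [rng.normal(size=(8, 8)) for _ in range(10)]
patch = attack_patch(frames, (8, 8))
before = np.mean([confidence(w, np.full((8, 8), 0.5)) for w in frames])
after = np.mean([confidence(w, patch) for w in frames])
# The optimized patch lowers the average detection confidence.
```

Optimizing against the averaged objective is what keeps the patch effective as the victim vehicle approaches and the patch's apparent size and position change from frame to frame.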
