Christopher DiPalma, Ningfei Wang, Takami Sato, and Qi Alfred Chen (UC Irvine)

Robust perception is crucial for autonomous vehicle security. In this work, we design a practical adversarial patch attack against camera-based obstacle detection. We identify that the back of a box truck is an effective attack vector. We also improve attack robustness by considering a variety of input frames associated with the attack scenario. This demo includes videos showing that our attack can cause end-to-end consequences on a representative autonomous driving system in a simulator.
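To illustrate the general idea of optimizing a single patch against a variety of input frames, the sketch below shows a minimal expectation-over-frames style optimization loop. It is only an assumption-laden illustration, not the authors' implementation: the `detector` network, the `apply_patch` helper, the fixed patch region, and all dimensions are hypothetical stand-ins for the camera-based obstacle detector and the box-truck rear surface described in the abstract.

```python
# Hedged sketch (assumptions, not the authors' method): optimize one adversarial
# patch so that it suppresses detection scores across many frames of a scenario.
import torch
import torch.nn as nn

# Hypothetical stand-in for a camera-based obstacle detector; it outputs a
# per-frame "objectness" score. A real attack would target the AD system's model.
detector = nn.Sequential(
    nn.Conv2d(3, 8, 3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 1),
)

def apply_patch(frames, patch, box):
    """Paste the patch into each frame at a fixed region (e.g., the truck's rear)."""
    out = frames.clone()
    y, x, h, w = box
    out[:, :, y:y + h, x:x + w] = patch  # broadcast patch across the batch
    return out

# A batch of frames approximating different distances/viewpoints in the scenario
# (here random tensors; in practice these would come from driving footage).
frames = torch.rand(16, 3, 128, 128)
patch = torch.rand(3, 48, 48, requires_grad=True)  # patch placed on the truck rear
opt = torch.optim.Adam([patch], lr=0.01)

for step in range(200):
    patched = apply_patch(frames, patch.clamp(0, 1), (40, 40, 48, 48))
    loss = detector(patched).mean()  # average detection score over all frames
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Averaging the loss over many frames is what makes the patch effective across the whole approach sequence rather than at a single viewpoint, which is the robustness consideration the abstract mentions.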
