Christopher DiPalma, Ningfei Wang, Takami Sato, and Qi Alfred Chen (UC Irvine)

Robust perception is crucial for autonomous vehicle security. In this work, we design a practical adversarial patch attack against camera-based obstacle detection. We identify that the back of a box truck is an effective attack vector. We also improve attack robustness by optimizing over a variety of input frames associated with the attack scenario. This demo includes videos showing that our attack can cause end-to-end consequences on a representative autonomous driving system in a simulator.
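To illustrate the general idea of optimizing a patch over many frames from the attack scenario, below is a minimal sketch in PyTorch. It is not the authors' implementation: the detector interface, the frame batch, the patch location on the truck's rear, and the hiding-attack loss (suppressing obstacle confidence) are all assumptions made for illustration.

    import torch

    # Assumptions for this sketch (not from the paper):
    #   detector(frames) -> per-frame obstacle confidence scores, shape (N,)
    #   frames: batch of scenario frames, shape (N, 3, H, W), values in [0, 1]
    #   (top, left): pixel location of the box truck's rear in the frames

    def apply_patch(frames, patch, top, left):
        """Paste the adversarial patch onto every frame at a fixed location."""
        patched = frames.clone()
        h, w = patch.shape[-2:]
        patched[:, :, top:top + h, left:left + w] = patch
        return patched

    def optimize_patch(detector, frames, patch_size=(3, 64, 64),
                       top=100, left=120, steps=500, lr=0.01):
        # Start from a random patch; keep pixel values in [0, 1].
        patch = torch.rand(patch_size, requires_grad=True)
        opt = torch.optim.Adam([patch], lr=lr)
        for _ in range(steps):
            opt.zero_grad()
            patched = apply_patch(frames, patch, top, left)
            # Average the detector's obstacle confidence over all sampled
            # frames (distances, viewpoints) and minimize it, so the patch
            # remains effective across the whole attack scenario rather
            # than a single frame.
            loss = detector(patched).mean()
            loss.backward()
            opt.step()
            with torch.no_grad():
                patch.clamp_(0, 1)  # keep the patch a valid image
        return patch.detach()

In practice, averaging the loss over frames drawn from the scenario plays the same role as expectation-over-transformation style training: the optimized patch cannot overfit to one viewpoint, which is what the abstract refers to as improving attack robustness.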
