Christopher DiPalma, Ningfei Wang, Takami Sato, and Qi Alfred Chen (UC Irvine)

Robust perception is crucial for autonomous vehicle security. In this work, we design a practical adversarial patch attack against camera-based obstacle detection. We identify the back of a box truck as an effective attack vector. We also improve attack robustness by considering a variety of input frames associated with the attack scenario. This demo includes videos showing that our attack can cause end-to-end consequences on a representative autonomous driving system in a simulator.
