Christopher DiPalma, Ningfei Wang, Takami Sato, and Qi Alfred Chen (UC Irvine)

Robust perception is crucial for autonomous vehicle security. In this work, we design a practical adversarial patch attack against camera-based obstacle detection. We identify the back of a box truck as an effective attack vector, and we improve attack robustness by optimizing the patch over a variety of input frames associated with the attack scenario. This demo includes videos showing that our attack causes end-to-end consequences on a representative autonomous driving system in a simulator.
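
The multi-frame robustness idea resembles expectation-over-transformation-style patch optimization: rather than crafting the patch against a single image, the attack loss is averaged over many frames from the driving scenario. The sketch below is a minimal illustration of that idea under stated assumptions, not the authors' actual pipeline; the surrogate detector, patch size, placement coordinates, and hyperparameters are all placeholder assumptions, and a real attack would target an actual obstacle detection model.

```python
import torch
import torch.nn as nn

# Toy differentiable surrogate standing in for a camera-based obstacle
# detector: maps an image batch to a per-image obstacle confidence in
# [0, 1]. This stub only keeps the sketch self-contained and runnable.
surrogate_detector = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=5, stride=4),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(8, 1),
    nn.Sigmoid(),
)

def apply_patch(frames: torch.Tensor, patch: torch.Tensor,
                y0: int, x0: int) -> torch.Tensor:
    """Paste the patch into every frame at a fixed image region,
    playing the role of the back of the box truck."""
    patched = frames.clone()
    h, w = patch.shape[-2:]
    patched[..., y0:y0 + h, x0:x0 + w] = patch
    return patched

def optimize_patch(frames: torch.Tensor, detector: nn.Module,
                   steps: int = 300, lr: float = 0.01,
                   y0: int = 100, x0: int = 200) -> torch.Tensor:
    """Average the attack loss over many scenario frames so the patch
    stays effective across viewpoints instead of overfitting to one
    image. Coordinates and sizes here are illustrative placeholders."""
    patch = torch.rand(3, 64, 64, requires_grad=True)
    opt = torch.optim.Adam([patch], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        conf = detector(apply_patch(frames, patch, y0, x0))
        loss = conf.mean()  # hide the obstacle: push confidence down
        loss.backward()
        opt.step()
        with torch.no_grad():
            patch.clamp_(0.0, 1.0)  # keep the patch a valid image
    return patch.detach()

# Stand-in for frames captured while the vehicle approaches the truck.
frames = torch.rand(8, 3, 240, 320)
adv_patch = optimize_patch(frames, surrogate_detector)
```

Averaging the loss across frames approximates optimizing the patch under the distribution of viewpoints the camera sees while approaching the truck, which is what keeps the patch effective beyond a single image.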
