Chong Xiang (Princeton University), Chawin Sitawarin (University of California, Berkeley), Tong Wu (Princeton University), Prateek Mittal (Princeton University)

ETAS Best Short Paper Award Runner-Up!

The physical-world adversarial patch attack poses a security threat to AI perception models in autonomous vehicles. To mitigate this threat, researchers have designed defenses with certifiable robustness. In this paper, we survey existing certifiably robust defenses and highlight core robustness techniques that are applicable to a variety of perception tasks, including classification, detection, and segmentation. We emphasize the unsolved problems in this space to guide future research, and call for attention and efforts from both academia and industry to robustify perception models in autonomous vehicles.
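To make the notion of certifiable robustness concrete, here is a minimal sketch of the majority-vote certification logic used by ablation-based certified patch defenses (in the style of de-randomized smoothing). This is an illustration, not the method of any specific paper surveyed: the `votes` dictionary stands in for a classifier's predictions over image ablations, and `max_affected` is an assumed bound on how many ablations a patch of bounded size can intersect.

```python
# Toy majority-vote certification for patch robustness.
# Assumption: a base classifier has already voted once per image ablation
# (e.g., per column band); a bounded patch can corrupt at most
# `max_affected` of those votes.

def certify_majority_vote(votes, max_affected):
    """votes: dict mapping label -> vote count over all ablations.
    max_affected: max number of ablations a patch can intersect.
    Returns (predicted_label, certified): the prediction is certifiably
    robust if no patch can flip enough votes to change the majority.
    """
    ranked = sorted(votes.items(), key=lambda kv: -kv[1])
    top_label, top_count = ranked[0]
    runner_up = ranked[1][1] if len(ranked) > 1 else 0
    # Worst case: the patch removes `max_affected` votes from the top
    # class and adds the same number to the runner-up, so the margin
    # must exceed twice the patch's reach.
    certified = top_count - runner_up > 2 * max_affected
    return top_label, certified
```

For example, with `votes = {"car": 40, "truck": 5}` and `max_affected = 3`, the margin of 35 exceeds 6, so the "car" prediction is certified; with a margin of only 2 it would not be. Real defenses differ in how ablations are generated and how ties are broken, but the counting argument above is the common core.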
