Chong Xiang (Princeton University), Chawin Sitawarin (University of California, Berkeley), Tong Wu (Princeton University), Prateek Mittal (Princeton University)

ETAS Best Short Paper Award Runner-Up!

Physical-world adversarial patch attacks pose a security threat to AI perception models in autonomous vehicles. To mitigate this threat, researchers have designed defenses with certifiable robustness. In this paper, we survey existing certifiably robust defenses and highlight core robustness techniques that apply across a variety of perception tasks, including classification, detection, and segmentation. We emphasize the unsolved problems in this space to guide future research, and we call for attention and effort from both academia and industry to robustify perception models in autonomous vehicles.
