Chong Xiang (Princeton University), Chawin Sitawarin (University of California, Berkeley), Tong Wu (Princeton University), Prateek Mittal (Princeton University)

ETAS Best Short Paper Award Runner-Up!

Physical-world adversarial patch attacks pose a security threat to AI perception models in autonomous vehicles. To mitigate this threat, researchers have designed defenses with certifiable robustness. In this paper, we survey existing certifiably robust defenses and highlight core robustness techniques that apply across a variety of perception tasks, including classification, detection, and segmentation. We emphasize the unsolved problems in this space to guide future research, and call for attention and effort from both academia and industry to robustify perception models in autonomous vehicles.
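Many certifiably robust patch defenses build on a masking-and-agreement idea: occlude every possible patch location and check whether the model's prediction stays stable. The toy sketch below illustrates that idea only; the classifier, mask size, and certification rule are illustrative assumptions, not the method of any specific defense surveyed in the paper.

```python
import numpy as np

def masked_predictions(image, classify, mask_size, stride):
    """Slide an occlusion mask over the image and record the
    prediction for each mask placement."""
    h, w = image.shape[:2]
    preds = []
    for y in range(0, h - mask_size + 1, stride):
        for x in range(0, w - mask_size + 1, stride):
            masked = image.copy()
            masked[y:y + mask_size, x:x + mask_size] = 0.0
            preds.append(classify(masked))
    return preds

def certify(image, classify, mask_size, stride):
    """Toy certification check: if every one-mask prediction agrees,
    then (under this simplified model) an adversarial patch that is
    fully covered by some mask placement cannot change the output."""
    preds = masked_predictions(image, classify, mask_size, stride)
    return len(set(preds)) == 1, preds[0]

# Illustrative classifier: label 1 if the mean pixel value exceeds 0.5.
classify = lambda img: int(img.mean() > 0.5)

clean = np.full((8, 8), 0.9)  # uniformly bright toy "image"
ok, label = certify(clean, classify, mask_size=4, stride=4)
```

On this uniformly bright input every masked prediction agrees, so the toy check reports the prediction as certified; real defenses replace the toy classifier with a trained model and prove formal guarantees about the agreement rule.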
