Chong Xiang (Princeton University), Chawin Sitawarin (University of California, Berkeley), Tong Wu (Princeton University), Prateek Mittal (Princeton University)

ETAS Best Short Paper Award Runner-Up!

Physical-world adversarial patch attacks pose a security threat to AI perception models in autonomous vehicles. To mitigate this threat, researchers have designed defenses with certifiable robustness. In this paper, we survey existing certifiably robust defenses and highlight core robustness techniques that apply to a variety of perception tasks, including classification, detection, and segmentation. We emphasize the unsolved problems in this space to guide future research, and call for attention and effort from both academia and industry to robustify perception models in autonomous vehicles.
