Chong Xiang (Princeton University), Chawin Sitawarin (University of California, Berkeley), Tong Wu (Princeton University), Prateek Mittal (Princeton University)

ETAS Best Short Paper Award Runner-Up!

Physical-world adversarial patch attacks pose a security threat to AI perception models in autonomous vehicles. To mitigate this threat, researchers have designed defenses with certifiable robustness. In this paper, we survey existing certifiably robust defenses and highlight core robustness techniques that are applicable to a variety of perception tasks, including classification, detection, and segmentation. We emphasize the unsolved problems in this space to guide future research, and we call for attention and effort from both academia and industry to robustify perception models in autonomous vehicles.
