Chong Xiang (Princeton University), Chawin Sitawarin (University of California, Berkeley), Tong Wu (Princeton University), Prateek Mittal (Princeton University)

ETAS Best Short Paper Award Runner-Up!

Physical-world adversarial patch attacks pose a security threat to AI perception models in autonomous vehicles. To mitigate this threat, researchers have designed defenses with certifiable robustness. In this paper, we survey existing certifiably robust defenses and highlight core robustness techniques that apply across a variety of perception tasks, including classification, detection, and segmentation. We emphasize the unsolved problems in this space to guide future research, and we call for attention and effort from both academia and industry to robustify perception models in autonomous vehicles.
