Chong Xiang (Princeton University), Chawin Sitawarin (University of California, Berkeley), Tong Wu (Princeton University), Prateek Mittal (Princeton University)

ETAS Best Short Paper Award Runner-Up!

Physical-world adversarial patch attacks pose a security threat to AI perception models in autonomous vehicles. To mitigate this threat, researchers have designed defenses with certifiable robustness. In this paper, we survey existing certifiably robust defenses and highlight core robustness techniques that apply across a variety of perception tasks, including classification, detection, and segmentation. We identify the unsolved problems in this space to guide future research, and call for attention and effort from both academia and industry to robustify perception models in autonomous vehicles.
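To make the surveyed idea concrete, here is a minimal, hypothetical Python/NumPy sketch of the masking-based certification strategy used by classification defenses in this line of work, such as PatchCleanser. The `model` callable, the mask geometry, and the zero fill value are illustrative assumptions, not the authors' exact algorithm.

```python
import itertools
import numpy as np

def build_masks(img_h, img_w, mask_size, stride):
    """Mask locations dense enough that any square patch with side
    <= mask_size - stride + 1 is fully covered by at least one mask."""
    ys = list(range(0, img_h - mask_size, stride)) + [img_h - mask_size]
    xs = list(range(0, img_w - mask_size, stride)) + [img_w - mask_size]
    return [(y, x, mask_size) for y in ys for x in xs]

def occlude(image, *masks):
    """Return a copy of `image` with each mask region set to a neutral value."""
    out = image.copy()
    for y, x, s in masks:
        out[y:y + s, x:x + s] = 0.0  # zero fill is an illustrative choice
    return out

def certify(model, clean_image, label, masks):
    """Sufficient condition in the spirit of double-masking defenses:
    if every one- and two-mask occlusion of the clean image is still
    classified as `label`, a masked-inference procedure is robust to
    any patch that some single mask fully covers."""
    return all(
        model(occlude(clean_image, m1, m2)) == label
        for m1, m2 in itertools.combinations_with_replacement(masks, 2)
    )

if __name__ == "__main__":
    img = np.zeros((32, 32, 3), dtype=np.float32)           # dummy image
    dummy_model = lambda x: 0                                # always class 0
    masks = build_masks(32, 32, mask_size=12, stride=6)
    print(certify(dummy_model, img, label=0, masks=masks))   # True
```

The intuition behind checking pairs of masks: at inference time the defense itself occludes the (possibly attacked) input, so one mask removes the patch while the second accounts for the mask the inference procedure applies; unanimity over all pairs therefore guarantees the masked prediction cannot be flipped by a covered patch.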
