Chong Xiang (Princeton University), Chawin Sitawarin (University of California, Berkeley), Tong Wu (Princeton University), Prateek Mittal (Princeton University)

ETAS Best Short Paper Award Runner-Up!

Physical-world adversarial patch attacks pose a security threat to AI perception models in autonomous vehicles. To mitigate this threat, researchers have designed defenses with certifiable robustness. In this paper, we survey existing certifiably robust defenses and highlight core robustness techniques that apply across a variety of perception tasks, including classification, detection, and segmentation. We emphasize the unsolved problems in this space to guide future research, and call for attention and effort from both academia and industry to robustify perception models in autonomous vehicles.
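The patch threat model the abstract refers to can be illustrated with a minimal digital sketch: an attacker who controls only a small contiguous pixel region optimizes those pixels to flip a model's prediction. Everything below (the 2-class linear model, shapes, step size, patch location) is an illustrative assumption, not the setup from the paper; real attacks target deep perception models with physically printed patches.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(2, 64))            # weights of a toy 2-class linear model
image = rng.uniform(0.4, 0.6, (8, 8))   # benign 8x8 grayscale "image"

def predict(img):
    return int(np.argmax(W @ img.ravel()))

source = predict(image)
target = 1 - source                      # attack goal: flip the prediction

# The attacker controls only a 3x3 patch region; all other pixels are fixed.
r, c, k = 2, 2, 3
mask = np.zeros((8, 8), dtype=bool)
mask[r:r + k, c:c + k] = True

def margin(img):
    # target-class logit minus source-class logit; > 0 means the attack won
    return float((W[target] - W[source]) @ img.ravel())

# Gradient ascent on the margin, projected onto valid pixel values [0, 1].
# For a linear model the pixel gradient is constant: W[target] - W[source].
grad = (W[target] - W[source]).reshape(8, 8)
adv = image.copy()
for _ in range(100):
    adv[mask] = np.clip(adv[mask] + 0.05 * grad[mask], 0.0, 1.0)

print(f"margin before: {margin(image):+.3f}, after: {margin(adv):+.3f}")
```

Certifiably robust defenses, the subject of this survey, aim to guarantee correct output for a given input against *any* patch in this threat model (any content, within a bounded region), rather than against one specific optimized patch.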
