Chong Xiang (Princeton University), Chawin Sitawarin (University of California, Berkeley), Tong Wu (Princeton University), Prateek Mittal (Princeton University)

ETAS Best Short Paper Award Runner-Up!

Physical-world adversarial patch attacks pose a security threat to AI perception models in autonomous vehicles. To mitigate this threat, researchers have designed defenses with certifiable robustness. In this paper, we survey existing certifiably robust defenses and highlight core robustness techniques that apply across a range of perception tasks, including classification, detection, and segmentation. We emphasize the unsolved problems in this space to guide future research and call for attention and effort from both academia and industry to robustify perception models in autonomous vehicles.
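As an illustration of what such certifiable robustness can look like, the sketch below follows one well-known technique family in this space, de-randomized smoothing for image classification: the model classifies many column-band ablations of the input, a majority vote picks the label, and the prediction is certified whenever the vote margin exceeds what a size-bounded patch could possibly flip. This is a minimal sketch under assumed parameters (`band_width`, `patch_width`) and an assumed `classifier` callable trained on band-ablated inputs; it is not the specific method of any one surveyed defense.

```python
import math
import numpy as np

def certified_patch_classify(image, classifier, band_width=20, patch_width=32):
    """Majority-vote classification over column-band ablations.

    `classifier` maps an HxWxC array to an integer label and is assumed
    (for this sketch) to have been trained on band-ablated inputs.
    Returns (label, certified), where `certified` means no adversarial
    patch up to `patch_width` pixels wide can change the label.
    """
    h, w, c = image.shape
    votes = {}
    for start in range(0, w, band_width):
        # Keep one vertical band visible; ablate (zero out) everything else,
        # so a patch outside the band cannot influence this vote.
        ablated = np.zeros_like(image)
        ablated[:, start:start + band_width] = image[:, start:start + band_width]
        label = classifier(ablated)
        votes[label] = votes.get(label, 0) + 1

    # Conservative bound: a contiguous patch of width `patch_width`
    # can intersect at most this many bands.
    max_affected = math.ceil(patch_width / band_width) + 1

    ranked = sorted(votes.items(), key=lambda kv: -kv[1])
    top_label, top_votes = ranked[0]
    runner_up_votes = ranked[1][1] if len(ranked) > 1 else 0

    # Each band the patch touches can, at worst, move one vote from the
    # top class to the runner-up, shifting the margin by 2 per band.
    certified = top_votes - runner_up_votes > 2 * max_affected
    return top_label, certified
```

With these illustrative defaults on a 224-pixel-wide input, the loop produces 12 band ablations, a 32-pixel-wide patch can intersect at most 3 of them, and a prediction is therefore certified when the top class leads the runner-up by more than 6 votes.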

