Katherine S. Zhang (Purdue University), Claire Chen (Pennsylvania State University), Aiping Xiong (Pennsylvania State University)

Artificial intelligence (AI) systems in autonomous driving are vulnerable to a number of attacks, particularly physical-world attacks that tamper with physical objects in the driving environment to cause AI errors. When AI systems fail or are about to fail, human drivers are required to take over vehicle control. To understand such human-AI collaboration, in this work we examine 1) whether human drivers can detect these attacks, 2) how they project the consequences for autonomous driving, and 3) what information they expect in order to safely take over vehicle control. We conducted an online survey on Prolific. Participants (N = 100) viewed benign and adversarial images of two physical-world attacks. We also presented videos of simulated driving for both attacks. Our results show that participants did not seem to be aware of the attacks. They overestimated the AI's ability to detect the object more in the dirty-road attack than in the stop-sign attack. Such overestimation was also evident when participants predicted the AI's performance in autonomous driving. We also found that participants expected different information (e.g., warnings and AI explanations) to safely take over control of autonomous driving.
