Katherine S. Zhang (Purdue University), Claire Chen (Pennsylvania State University), Aiping Xiong (Pennsylvania State University)

Artificial intelligence (AI) systems in autonomous driving are vulnerable to a number of attacks, particularly physical-world attacks that tamper with physical objects in the driving environment to induce AI errors. When AI systems fail or are about to fail, human drivers are required to take over vehicle control. To understand such human-AI collaboration, in this work we examine 1) whether human drivers can detect these attacks, 2) how they project the consequences for autonomous driving, and 3) what information they expect in order to safely take over vehicle control. We conducted an online survey on Prolific. Participants (N = 100) viewed benign and adversarial images of two physical-world attacks. We also presented videos of simulated driving for both attacks. Our results show that participants did not seem to be aware of the attacks. They overestimated the AI’s ability to detect the attacked object more in the dirty-road attack than in the stop-sign attack. Such overestimation was also evident when participants predicted the AI’s performance in autonomous driving. We also found that participants expected different information (e.g., warnings and AI explanations) to safely take over control of the autonomous vehicle.
