Chen Ma (Xi'an Jiaotong University), Ningfei Wang (University of California, Irvine), Qi Alfred Chen (University of California, Irvine), Chao Shen (Xi'an Jiaotong University)

Recently, adversarial examples against object detection have been widely studied. However, it is difficult for these attacks to affect visual perception in autonomous driving, because the complete visual pipeline of real-world autonomous driving systems includes not only object detection but also object tracking. In this paper, we present a novel tracker hijacking attack against the multi-target tracking algorithms employed by real-world autonomous driving systems, which manipulates the bounding boxes produced by object detection to spoof the multiple object tracking process. Our approach exploits the detection box generation process of anchor-based object detection algorithms and designs new optimization methods to generate adversarial patches that can successfully perform tracker hijacking attacks, causing security risks. The evaluation results show that our approach achieves an 85% attack success rate on two detection models employed by real-world autonomous driving systems. We also discuss potential next steps for this work.
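The hijacking mechanism sketched above can be illustrated with a minimal, self-contained example (this is a conceptual sketch, not the authors' implementation; the greedy IoU matcher, threshold, and box coordinates are illustrative assumptions): the adversarial patch suppresses the victim object's true detection box and fabricates a slightly shifted one, which an IoU-based multi-object tracker then associates with the existing track, dragging the track off the real object.

```python
# Conceptual sketch of IoU-based tracker hijacking (not the authors' code).
# The patch erases the true detection and fabricates a shifted box; a
# greedy IoU matcher then adopts the shifted box for the existing track.

def iou(a, b):
    """Intersection-over-union of two [x1, y1, x2, y2] boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

def associate(track_box, detections, min_iou=0.3):
    """Greedy matcher: the track adopts the detection with the highest
    IoU, or none if no detection clears the threshold."""
    best = max(detections, key=lambda d: iou(track_box, d), default=None)
    if best is not None and iou(track_box, best) >= min_iou:
        return best
    return None

track   = [100, 100, 200, 200]  # tracker's predicted box for the victim object
shifted = [140, 100, 240, 200]  # patch-fabricated box, moved toward the attack goal
other   = [400, 400, 500, 500]  # unrelated detection elsewhere in the frame

# With the true detection suppressed by the patch, the shifted box still
# overlaps the predicted track enough to win association, so the track
# center drifts in the attacker-chosen direction on the next update.
adopted = associate(track, [shifted, other])
```

In a real pipeline the tracker's Kalman filter would then update its state toward the adopted box, so repeating this over a few frames moves the track arbitrarily far from the physical object, which is what makes the attack consequential despite tracking normally smoothing out single-frame detection errors.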
