Raymond Muller (Purdue University), Yanmao Man (University of Arizona), Z. Berkay Celik (Purdue University), Ming Li (University of Arizona) and Ryan Gerdes (Virginia Tech)

With the emergence of vision-based autonomous driving (AD) systems, it becomes increasingly important to have datasets for evaluating their correct operation and identifying potential security flaws. However, when collecting large amounts of data, either human experts manually label potentially hundreds of thousands of image frames, or systems label the data with machine learning algorithms in the hope that the accuracy is good enough for the application. This becomes especially problematic when tracking context information, such as the location and velocity of surrounding objects, which is useful for evaluating the correctness and improving the stability and robustness of AD systems.
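As a rough illustration of the kind of context information referred to above, the sketch below derives per-object position and velocity from labeled frames. It assumes each label carries an object ID and an (x, y) center in meters at a fixed frame rate; all names and fields are hypothetical and not taken from the paper.

```python
# Hypothetical sketch: deriving per-object context information (position and
# velocity) from labeled frames. Field names and the frame rate are assumptions.
from dataclasses import dataclass

FRAME_RATE_HZ = 10.0  # assumed labeling rate


@dataclass
class LabeledObject:
    obj_id: int
    x: float  # meters, in a common reference frame
    y: float


def estimate_velocities(prev_frame, curr_frame, dt=1.0 / FRAME_RATE_HZ):
    """Estimate (vx, vy) per object from two consecutive labeled frames."""
    prev = {o.obj_id: o for o in prev_frame}
    velocities = {}
    for obj in curr_frame:
        if obj.obj_id in prev:
            p = prev[obj.obj_id]
            velocities[obj.obj_id] = ((obj.x - p.x) / dt, (obj.y - p.y) / dt)
    return velocities


# Example: an object that moves 1 m between frames at 10 Hz is moving at 10 m/s.
frame_t0 = [LabeledObject(1, 0.0, 0.0)]
frame_t1 = [LabeledObject(1, 1.0, 0.0)]
print(estimate_velocities(frame_t0, frame_t1))  # {1: (10.0, 0.0)}
```

Manually producing such annotations at scale, or trusting model-generated labels of uncertain accuracy, is the burden the paper highlights.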
