Raymond Muller (Purdue University), Yanmao Man (University of Arizona), Z. Berkay Celik (Purdue University), Ming Li (University of Arizona) and Ryan Gerdes (Virginia Tech)

With the emergence of vision-based autonomous driving (AD) systems, it is increasingly important to have datasets for evaluating their correct operation and identifying potential security flaws. However, when collecting large amounts of data, either human experts must manually label potentially hundreds of thousands of image frames, or machine learning algorithms are used to label the data in the hope that their accuracy is good enough for the application. This becomes especially problematic when tracking context information, such as the location and velocity of surrounding objects, which is useful for evaluating the correctness and improving the stability and robustness of AD systems.
