Monday, 26 February

  • 07:30 - 09:00
    Breakfast
    Boardroom with Foyer
  • 09:00 - 09:30
    Opening Remarks and Awards
    Kon Tiki Ballroom
  • 09:30 - 10:00
    Session 1A: Autonomous Vehicle Security – 1
    Chair: Yiheng Feng (Purdue University)
    Kon Tiki Ballroom
    • Ahmed Abdo, Sakib Md Bin Malek, Xuanpeng Zhao, Nael Abu-Ghazaleh (University of California, Riverside)

      ZOOX AutoDriving Security Award Winner ($1,000 cash prize)!

      Autonomous systems are vulnerable to physical attacks that manipulate their sensors through spoofing or other adversarial inputs or interference. If the sensors’ values are incorrect, an autonomous system can be directed to malfunction or even controlled to perform an adversary-chosen action, making this a critical threat to the success of these systems. To counter these attacks, a number of prior defenses have been proposed that compare the collected sensor values to those predicted by a physics-based model of the vehicle dynamics; these solutions can be limited by the accuracy of this prediction, which can leave room for an attacker to operate without being detected. We propose AVMON, which contributes a new detector that substantially improves detection accuracy, using the following ideas: (1) training and specialization of an estimation filter configuration to the vehicle and environment dynamics; (2) efficiently overcoming errors due to non-linearities, and capturing some effects outside the physics model, using a residual machine learning estimator; and (3) a change detection algorithm for keeping track of the behavior of the sensors to enable more accurate filtering of transients. These ideas together enable both efficient and high-accuracy estimation of the physical state of the vehicle, which substantially shrinks the attacker’s opportunity to manipulate the sensor data without detection. We show that AVMON can detect a wide range of attacks, with low overhead compatible with real-time implementations. We demonstrate AVMON for both ground vehicles (using an RC car testbed) and for aerial drones (using a hardware-in-the-loop simulator), as well as in simulations.
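
      A minimal Python sketch of the general physics-model-plus-change-detection idea the abstract describes (the toy speed model, noise levels, and CUSUM thresholds are illustrative assumptions, not AVMON's actual filter design):

      ```python
      # Illustrative sketch: physics-model prediction, residual computation, and a
      # simple CUSUM change detector over spoofed sensor readings.
      import numpy as np

      def predict_speed(prev_speed, throttle, dt=0.1, gain=2.0, drag=0.05):
          """Toy longitudinal model: next speed from previous speed and throttle."""
          return prev_speed + dt * (gain * throttle - drag * prev_speed)

      def cusum(residuals, drift=0.05, threshold=1.0):
          """Return the first index where the cumulative residual exceeds the threshold."""
          s = 0.0
          for i, r in enumerate(residuals):
              s = max(0.0, s + abs(r) - drift)
              if s > threshold:
                  return i
          return None

      rng = np.random.default_rng(0)
      speed, throttle = 10.0, 0.5
      residuals = []
      for t in range(100):
          predicted = predict_speed(speed, throttle)
          measured = predicted + rng.normal(0, 0.02)   # benign sensor noise
          if t >= 60:                                  # spoofing starts here
              measured += 0.3
          residuals.append(measured - predicted)
          speed = predicted

      print("change detected at step:", cusum(residuals))
      ```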

    • Sri Hrushikesh Varma Bhupathiraju (University of Florida), Takami Sato (University of California, Irvine), Michael Clifford (Toyota Info Labs), Takeshi Sugawara (The University of Electro-Communications), Qi Alfred Chen (University of California, Irvine), Sara Rampazzi (University of Florida)

      Connected, autonomous, semi-autonomous, and human-driven vehicles must accurately detect and adhere to traffic light signals to ensure safe and efficient traffic flow. Misinterpretation of traffic lights can result in potential safety issues. Recent work demonstrated attacks that projected structured light patterns onto vehicle cameras, causing traffic signal misinterpretation. In this work, we introduce a new physical attack method against traffic light recognition systems that exploits a vulnerability in the physical structure of traffic lights. We observe that when laser light is projected onto traffic lights, it is scattered by reflectors (mirrors) located inside the traffic lights. To a vehicle’s camera, the attacker-injected laser light appears to be a genuine light source, resulting in misclassifications by traffic light recognition models. We show that our methodology can induce misclassifications using both visible and invisible light when the traffic light is operational (on) and not operational (off). We present classification results for three state-of-the-art traffic light recognition models and show that this attack can cause misclassification of both red and green traffic light status. Tested on incandescent traffic lights, our attack can be deployed up to 25 meters from the target traffic light. It reaches an attack success rate of 100% in misclassifying green status, and 86% in misclassifying red status, in a controlled, dynamic scenario.

    • Nina Shamsi (Northeastern University), Kaeshav Chandrasekar, Yan Long, Christopher Limbach (University of Michigan), Keith Rebello (Boeing), Kevin Fu (Northeastern University)

      Control or disablement of computer vision-assisted autonomous vehicles via acoustic interference is an open problem in vehicle cybersecurity research. This work explores a new threat model in this problem space: acoustic interference via high-speed, pulsed lasers to non-destructively affect drone sensors. Initial experiments verified the feasibility of laser-induced acoustic wave generation at resonant frequencies of MEMS gyroscope sensors. Acoustic waves generated by a lab-scale laser produced a 300-fold noise floor modification in commercial off-the-shelf (COTS) gyroscope sensor readings. Computer vision functionalities of drones often depend on such vulnerable sensors, and can be a target of this new threat model because of camera motion blur caused by acoustic interference. The effect of laser-induced acoustics on object detection datasets was simulated by extracting blur kernels from drone images captured under different conditions of acoustic interference, including speaker-generated sound to emulate higher-intensity lasers, and evaluated using state-of-the-art object detection models. The results show an average 41.1% decrease in mean average precision for YOLOv8 across two datasets, and suggest an inverse relationship between an object detection model’s mean average precision and acoustic intensity. Object detection models with at least 60M parameters appear more resilient against laser-induced acoustic interference. Initial characterizations of laser-induced acoustic interference reveal future potential threat models affecting sensors and downstream software systems of autonomous vehicles.
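
      A small illustrative sketch of how blur from acoustic interference can be emulated and its effect on image sharpness quantified (synthetic image and kernel sizes are assumptions; the paper extracts blur kernels from real drone footage):

      ```python
      # Illustrative sketch: convolve an image with a horizontal motion-blur kernel whose
      # length stands in for interference intensity, then measure a simple sharpness proxy.
      import numpy as np
      from scipy.signal import convolve2d

      def motion_blur_kernel(length):
          """Horizontal motion-blur kernel; longer kernels model stronger vibration-induced blur."""
          k = np.zeros((length, length))
          k[length // 2, :] = 1.0 / length
          return k

      rng = np.random.default_rng(1)
      image = rng.random((64, 64))   # stand-in for a grayscale drone camera frame

      for label, length in [("no interference", 1), ("moderate", 7), ("strong", 15)]:
          blurred = convolve2d(image, motion_blur_kernel(length), mode="same", boundary="symm")
          sharpness = np.var(np.diff(blurred, axis=1))   # gradient variance drops as blur grows
          print(f"{label:15s} kernel={length:2d}  sharpness={sharpness:.4f}")
      ```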

    09:30 - 10:00
    Session 1B: Vehicular Network Security – 1
    Chair: Habeeb Olufowobi (UT Arlington)
    Rousseau Room
    • Shuguo Zhuo, Nuo Li, Kui Ren (The State Key Laboratory of Blockchain and Data Security, Zhejiang University)

      NMFTA Best Short Paper Award Winner ($200 cash prize)!

      Due to the absence of encryption and authentication mechanisms, the Controller Area Network (CAN) protocol, widely employed in in-vehicle networks, is susceptible to various cyber attacks. In safeguarding in-vehicle networks against cyber threats, numerous Machine Learning-based (ML) and Deep Learning-based (DL) anomaly detection methods have been proposed, demonstrating high accuracy and proficiency in capturing intricate data patterns. However, the majority of these methods are supervised and heavily reliant on labeled training datasets with known attack types, posing limitations in real-world scenarios where acquiring labeled attack data is challenging. In this paper, we present HistCAN, a lightweight and self-supervised Intrusion Detection System (IDS) designed to confront cyber attacks using solely benign training data. HistCAN employs a hybrid encoder capable of simultaneously learning spatial and temporal features of the input data, exhibiting robust pattern-capturing capabilities with a relatively compact parameter set. Additionally, a historical information fusion module is integrated into HistCAN, facilitating the capture of long-term dependencies and trends within the CAN ID series. Extensive experimental results demonstrate that HistCAN generally outperforms the compared baseline methods, achieving a high F1 score of 0.9954 in a purely self-supervised manner while satisfying real-time requirements.
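
      A minimal self-supervised baseline in the same spirit, trained only on benign CAN ID sequences (a simple transition-frequency model, not HistCAN's hybrid encoder; IDs and windows are toy values):

      ```python
      # Illustrative sketch: learn CAN ID transition frequencies from benign traffic only
      # and score windows by how consistent their transitions are with benign statistics.
      from collections import Counter, defaultdict

      def train(benign_ids):
          counts = defaultdict(Counter)
          for a, b in zip(benign_ids, benign_ids[1:]):
              counts[a][b] += 1
          return {a: {b: c / sum(nxt.values()) for b, c in nxt.items()} for a, nxt in counts.items()}

      def score(probs, window, floor=1e-6):
          # Lower score = less consistent with benign transition statistics.
          s = 1.0
          for a, b in zip(window, window[1:]):
              s *= probs.get(a, {}).get(b, floor)
          return s

      benign = [0x100, 0x200, 0x100, 0x300] * 500          # toy benign CAN ID trace
      model = train(benign)
      print("benign window :", score(model, [0x100, 0x200, 0x100, 0x300]))
      print("fuzzing window:", score(model, [0x100, 0x7FF, 0x7FF, 0x100]))   # injected IDs
      ```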

    • Alessio Buscemi, Thomas Engel (SnT, University of Luxembourg), Kang G. Shin (The University of Michigan)

      The Controller Area Network (CAN) is widely deployed as the de facto global standard for the communication between Electronic Control Units (ECUs) in the automotive sector. Despite being unencrypted, the data transmitted over CAN is encoded according to the Original Equipment Manufacturers’ (OEMs’) specifications, and their formats are kept secret from the general public. Thus, the only way to obtain accurate vehicle information from the CAN bus is through reverse engineering. Aftermarket companies and academic researchers have focused on automating the CAN reverse-engineering process to improve its speed and scalability. However, the manufacturers have recently started multiplexing the CAN frames primarily for platform upgrades, rendering state-of-the-art (SOTA) reverse engineering ineffective. To overcome this new barrier, we present CAN Multiplexed Frames Translator (CAN-MXT), the first tool for the identification of new-generation multiplexed CAN frames. We also introduce CAN Multiplexed Frames Generator (CAN-MXG), a tool for the parsing of standard CAN traffic into multiplexed traffic, facilitating research and app development on CAN multiplexing.

    • Artur Hermann, Natasa Trkulja (Ulm University - Institute of Distributed Systems), Anderson Ramon Ferraz de Lucena, Alexander Kiening (DENSO AUTOMOTIVE Deutschland GmbH), Ana Petrovska (Huawei Technologies), Frank Kargl (Ulm University - Institute of Distributed Systems)

      Future vehicles will run safety-critical applications that rely on data from entities within and outside the vehicle. Malicious manipulation of this data can lead to safety incidents. In our work, we propose a Trust Assessment Framework (TAF) that allows a component in a vehicle to assess whether it can trust the provided data. Based on a logic framework called Subjective Logic, the TAF determines a trust opinion for all components involved in processing or forwarding a data item. One particular challenge in this approach is the appropriate quantification of trust. To this end, we propose to derive trust opinions for electronic control units (ECUs) in an in-vehicle network based on the security controls implemented in the ECU, such as secure boot. We apply a Threat Analysis and Risk Assessment (TARA) to assess security controls at design time and use run-time information to calculate the associated trust opinions. The feasibility of the proposed concept is showcased using an in-vehicle application with two different scenarios. Based on the initial results presented in this paper, we see an indication that a trust assessment based on quantifying security controls represents a reasonable approach to providing trust opinions for a TAF.
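
      A small sketch of the Subjective Logic building blocks such a TAF can rest on, with assumed example opinions for design-time (TARA-derived) and run-time evidence (not the paper's actual quantification):

      ```python
      # Illustrative sketch: binomial opinions (belief, disbelief, uncertainty) and
      # cumulative fusion, used here to combine two independent opinions about one ECU.
      from dataclasses import dataclass

      @dataclass
      class Opinion:
          belief: float
          disbelief: float
          uncertainty: float
          base_rate: float = 0.5

          def expected(self):
              return self.belief + self.base_rate * self.uncertainty

      def fuse(a, b):
          """Cumulative fusion of two independent opinions about the same ECU."""
          k = a.uncertainty + b.uncertainty - a.uncertainty * b.uncertainty
          return Opinion(
              (a.belief * b.uncertainty + b.belief * a.uncertainty) / k,
              (a.disbelief * b.uncertainty + b.disbelief * a.uncertainty) / k,
              (a.uncertainty * b.uncertainty) / k,
              a.base_rate,
          )

      # Assumed design-time opinion from a TARA of implemented controls (e.g., secure boot)
      design_time = Opinion(belief=0.7, disbelief=0.1, uncertainty=0.2)
      # Assumed run-time opinion, e.g., from observed attestation results
      run_time = Opinion(belief=0.6, disbelief=0.05, uncertainty=0.35)

      trust = fuse(design_time, run_time)
      print("fused trust opinion:", trust, "expected trust:", round(trust.expected(), 3))
      ```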

  • 10:00 - 10:15
    Coffee Break
  • 10:15 - 11:15
    Keynote 1 by Prof. Wenyuan Xu (Professor, Zhejiang University)
    Chair: Aiping Xiong (Penn State University)
    Kon Tiki Ballroom
    • With the rapid development of the Internet of Things (IoT), new security issues have emerged that traditional vulnerability categorization may not fully cover. IoT devices rely on sensors and actuators to interact with the real world, but this interaction process between physical and digital systems has created defects that are difficult to analyze and detect. These defects include unintentional coupling effects of sensors from ambient analog signals or abnormal channels that were not intentionally designed. Various security incidents have highlighted the prevalence of such vulnerabilities in IoT systems, and their activation can result in serious consequences. Our talk highlights the need to shift the research paradigm for traditional system security to encompass sensor vulnerabilities in the intelligence era. Finally, we explore potential solutions for mitigating sensor vulnerabilities and securing IoT devices.

      Speaker's Biography: Wenyuan Xu is a Professor in the College of Electrical Engineering at Zhejiang University. She received her Ph.D. in Electrical and Computer Engineering from Rutgers University in 2007. Before joining Zhejiang University in 2013, she was a tenured faculty member in the Department of Computer Science and Engineering at the University of South Carolina in the United States. Her research focuses on embedded systems security, smart systems security, and IoT security. She is an IEEE fellow and a recipient of the NSF CAREER award. She received various best-paper awards including ACM CCS 2017 and ACM AsiaCCS 2018. In addition, she is a program committee co-chair for NDSS 2022-2023 and USENIX Security 2024, and serves as an associate editor for IEEE TMC, ACM TOSN, and TPS.

  • 11:15 - 11:20
    Short Break
  • 11:20 - 11:50
    Session 2A: Electric Vehicle Charging Security
    Chair: Ryan Gerdes (Virginia Tech)
    Kon Tiki Ballroom
    • Gaetano Coppoletta (University of Illinois Chicago), Rigel Gjomemo (Discovery Partners Institute, University of Illinois), Amanjot Kaur, Nima Valizadeh (Cardiff University), Venkat Venkatakrishnan (Discovery Partners Institute, University of Illinois), Omer Rana (Cardiff University)

      In the last decade, electric vehicles (EVs) have moved from a niche of the transportation sector to its most innovative, dynamic, and growing segment. The associated EV charging infrastructure is closely following behind. One of the main components of such infrastructure is the Open Charge Point Protocol (OCPP), which defines the messages exchanged between charging stations and central management systems owned by charging companies. This paper presents OCPPStorm, a tool for testing the security of OCPP implementations. OCPPStorm is designed as a black-box testing tool, in order to be able to deal with different implementations, independently of their deployment peculiarities, platforms, or languages used. In particular, OCPPStorm applies fuzzing techniques to the OCPP messages to identify errors in the message management and find vulnerabilities among those errors. Its efficacy is demonstrated through extensive testing on two open-source OCPP systems, revealing its proficiency in uncovering critical security flaws, including 5 confirmed CVEs and 7 under review. OCPPStorm’s goal is to bolster the methodological approach to OCPP security testing, thereby reinforcing the reliability and safety of the EV charging ecosystem.
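
      A minimal black-box fuzzing sketch in the spirit of the abstract (hypothetical field mutations over an OCPP 1.6-J BootNotification frame; transport to a system under test is omitted, and this is not the OCPPStorm implementation):

      ```python
      # Illustrative sketch: mutate fields of an OCPP-J CALL frame and emit the
      # resulting JSON. A real harness would send these over a WebSocket to a CSMS
      # under test and watch for errors or crashes.
      import json, random, uuid

      BASE_PAYLOAD = {"chargePointVendor": "AcmeEV", "chargePointModel": "CP-1"}

      def mutate(value):
          """Pick one structural or content mutation for a JSON value."""
          return random.choice([
              None,                 # drop to null
              "",                   # empty string
              "A" * 4096,           # oversized string
              12345,                # type confusion
              {"nested": value},    # unexpected nesting
          ])

      def fuzz_frame():
          payload = dict(BASE_PAYLOAD)
          field = random.choice(list(payload))
          payload[field] = mutate(payload[field])
          # OCPP-J CALL frame: [MessageTypeId, UniqueId, Action, Payload]
          return json.dumps([2, str(uuid.uuid4()), "BootNotification", payload])

      random.seed(7)
      for _ in range(3):
          print(fuzz_frame())
      ```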

    • Soyeon Son (Korea University), Kyungho Joo (Korea University), Wonsuk Choi (Korea University), Dong Hoon Lee (Korea University)

      ETAS Best Paper Award Winner ($500 cash prize)!

      The proliferation of electric vehicles (EVs) and the simultaneous expansion of EV charging infrastructure have underscored the growing importance of securing EV charging systems. Power line communication is one of the most widely implemented communication technologies that is standardized by the Combined Charging System (CCS) and the North American Charging Standard (NACS). Recently, it has been revealed that an unshielded charging cable can function as a susceptible antenna. As a result, attackers can eavesdrop on communication packets between charging stations and EVs or maliciously suspend charging sessions.

      To secure EV charging systems against signal injection attacks, we propose a signal cancellation system that restores benign charging sessions by annihilating the attack signal. An essential step in the proposed method is accurately estimating the carrier phase offset (CPO) and channel state values of the attack signal. Due to the inaccurate estimation of CPO and channel state values, continuous updates using linear interpolation are necessary. To evaluate the effectiveness of the proposed technique, we show that normal communication is achieved with the success of the signal level attenuation characterization (SLAC) protocol within 1.5 seconds. Experiments are conducted to determine the appropriate update parameters for attaining a 100% success rate in normal communication. We also analyze the error between the predicted CPO and channel state values and the actual CPO and channel state values of the attack signals. Furthermore, the effectiveness of the proposed method is evaluated based on the power of the injected attack signal. We have confirmed that when the power of the received attack signal is less than −31.8 dBm, applying the proposed technique with the suitable update parameters leads to 100% success in normal communication.
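
      A toy numeric sketch of the cancellation idea for a single-tone attack signal (assumed frequencies and amplitudes; the real system estimates CPO and channel state over PLC waveforms and refreshes the estimates with linear interpolation):

      ```python
      # Illustrative sketch: estimate the amplitude and carrier phase offset of an
      # injected tone by I/Q projection, reconstruct it, and subtract it from the
      # received signal.
      import numpy as np

      fs, f_c = 1_000_000, 100_000                 # sample rate and attack carrier (Hz), assumed
      t = np.arange(0, 0.001, 1 / fs)
      benign = 0.2 * np.sin(2 * np.pi * 30_000 * t)          # stand-in for the legitimate signal
      attack = 0.8 * np.cos(2 * np.pi * f_c * t + 1.1)        # injected tone, unknown phase 1.1 rad
      received = benign + attack

      # I/Q projection onto the attack carrier gives amplitude and carrier phase offset.
      i = 2 * np.mean(received * np.cos(2 * np.pi * f_c * t))
      q = -2 * np.mean(received * np.sin(2 * np.pi * f_c * t))
      amp, cpo = np.hypot(i, q), np.arctan2(q, i)

      cancelled = received - amp * np.cos(2 * np.pi * f_c * t + cpo)
      print(f"estimated CPO: {cpo:.3f} rad (true 1.100)")
      print("residual attack power:", round(float(np.mean((cancelled - benign) ** 2)), 6))
      ```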

    11:20 - 11:50
    Session 2B: Firewall and IDS
    Chair: Hyungsub Kim (Purdue University)
    Rousseau Room
    • Evan Allen (Virginia Tech), Zeb Bowden (Virginia Tech Transportation Institute), J. Scot Ransbottom (Virginia Tech)

      Attackers have found numerous vulnerabilities in the Electronic Control Units (ECUs) of modern vehicles, enabling them to stop the car, control its brakes, and take other potentially disruptive actions. Many of these attacks were possible because the vehicles had insecure In-Vehicle Networks (IVNs), where ECUs could send any message to each other. For example, an attacker who compromised an infotainment ECU might be able to send a braking message to a wheel. In this work, we introduce a scheme based on distributed firewalls to block these unauthorized messages according to a set “security policy” defining what transmissions each ECU should be able to send and receive. We leverage the topology of new switched, zonal networks to authenticate messages without cryptography, using Ternary Content Addressable Memories (TCAMs) to enforce the policy at wire speed. Crucially, our approach minimizes the security burden on edge ECUs and places control in a set of hardened zonal gateways. Through an OMNeT++ simulation of a zonal IVN, we demonstrate that our scheme has much lower overhead than modern cryptography-based approaches and allows for real-time, low-latency (<0.1 ms) traffic.
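
      A minimal sketch of the TCAM-style ternary matching that such a zonal gateway policy relies on (the rules and CAN ID ranges are invented for illustration, not the paper's policy format):

      ```python
      # Illustrative sketch: each rule is a (value, mask, action) triple over the CAN ID;
      # masked-out bits are "don't care" and the first matching rule wins.
      RULES = [
          (0x0F0, 0x7F0, "allow"),   # infotainment zone may send IDs 0x0F0-0x0FF
          (0x100, 0x700, "deny"),    # nothing in the braking range 0x100-0x1FF
          (0x000, 0x000, "deny"),    # default deny (mask 0 matches everything)
      ]

      def decide(can_id):
          for value, mask, action in RULES:
              if can_id & mask == value & mask:   # ternary match against the masked bits
                  return action
          return "deny"

      for can_id in (0x0F3, 0x1A0, 0x300):
          print(f"ID 0x{can_id:03X}: {decide(can_id)}")
      ```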

    • Paolo Cerracchio, Stefano Longari, Michele Carminati, Stefano Zanero (Politecnico di Milano)

      The evolution of vehicles has led to the integration of numerous devices that communicate via the controller area network (CAN) protocol. This protocol lacks security measures, leaving interconnected critical components vulnerable. The expansion of local and remote connectivity has increased the attack surface, heightening the risk of unauthorized intrusions. Since recent studies have proven external attacks to constitute a real-world threat to vehicle availability, driving data confidentiality, and passenger safety, researchers and car manufacturers have focused on implementing effective defenses. Intrusion detection systems (IDSs), frequently employing machine learning models, are a prominent solution. However, IDSs are not foolproof, and attackers with knowledge of these systems can orchestrate adversarial attacks to evade detection. In this paper, we evaluate the effectiveness of popular adversarial techniques in the automotive domain to ascertain the resilience, characteristics, and vulnerabilities of several ML-based IDSs. We propose three gradient-based evasion algorithms and evaluate them against six detection systems. We find that the algorithms’ performance heavily depends on the model’s complexity and the intended attack’s quality. Also, we study the transferability between different detection systems and different time instants in the communication.
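
      A minimal gradient-based evasion sketch against a toy logistic-regression detector, illustrating the general technique (the paper's algorithms and the six evaluated IDSs are more involved):

      ```python
      # Illustrative FGSM-style sketch: perturb an attack sample against the gradient
      # sign of a toy detector so its "malicious" score drops below the decision point.
      import numpy as np

      w = np.array([1.5, -2.0, 0.8])      # toy trained IDS weights over 3 features
      b = -0.2

      def detect(x):
          return 1 / (1 + np.exp(-(w @ x + b)))   # probability the frame is malicious

      x_attack = np.array([1.0, -0.5, 0.7])        # feature vector of an attack frame
      print("before evasion:", round(detect(x_attack), 3))

      # The gradient of the score w.r.t. the input is sigmoid'(z) * w, so its sign is
      # sign(w); stepping against it lowers the score (epsilon is an assumed budget).
      epsilon = 1.0
      x_evasive = x_attack - epsilon * np.sign(w)
      print("after evasion: ", round(detect(x_evasive), 3))
      ```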

  • 11:50 - 12:15
    Lightning Talks
    Session Chair: Mert Pese (Clemson University)
    Kon Tiki Ballroom
  • 12:15 - 13:15
    Lunch
    Lawn
  • 13:15 - 14:15
    Keynote 2 by Mr. Urban K. Jonson (Senior Vice President of Information Technology and Cybersecurity Services, SERJON)
    Chair: Ziming Zhao (University at Buffalo)
    Kon Tiki Ballroom
    • The complexity of vehicle cybersecurity seems to be increasing at an ever-accelerating pace. With the electrification of transportation, the adoption of AI, and new regulations and standards, “secure by design” seems to be moving out of reach. How can we navigate this complex realm and make actual progress? What does the industry need to focus on? What can academia do to help advance the state of the possible? Join us as we explore some answers to these important questions.

      Speaker's Biography: Urban conducted some of the first research into heavy vehicle cybersecurity in 2014 and wrote one of the first papers on the subject in 2015. While at NMFTA, Urban founded and ran the heavy vehicle cybersecurity / commercial transportation security and research program. With over thirty-five years of experience, Urban is a hands-on technologist and leader. He has a successful track record of understanding, analyzing, mapping, and providing solutions for complex systems. Urban maintains several vehicle cybersecurity advisory roles, including technical support to SAE International standards committees, a Technology & Maintenance Council (TMC) S.5 and S.12 Study Group Member, ESCAR USA Conference Program Committee, CyberTruck Challenge Board Member and Speaker, and a Transportation Cybersecurity Subject Matter Expert for FBI InfraGard and FBI Automotive Sector Specific Working Group.

  • 14:15 - 15:00
    Session 3A: Autonomous Vehicle Security – 2
    Chair: Jeremy Daily (Colorado State University)
    Kon Tiki Ballroom
    • ETAS Best Paper Award Runner-up!

      In compliance with U.S. regulations, modern commercial trucks are required by law to be equipped with Electronic Logging Devices (ELDs), which have become potential cybersecurity threat vectors. Our research uncovers three critical vulnerabilities in commonly used ELDs.

      First, we demonstrate that these devices can be wirelessly controlled to send arbitrary Controller Area Network (CAN) messages, enabling unauthorized control over vehicle systems. The second vulnerability demonstrates that malicious firmware can be uploaded to these ELDs, allowing attackers to manipulate data and vehicle operations arbitrarily. The final vulnerability, and perhaps the most concerning, is the potential for a self-propagating truck-to-truck worm, which takes advantage of the inherent networked nature of these devices. Such an attack could lead to widespread disruptions in commercial fleets, with severe safety and operational implications. For the purpose of demonstration, bench-level testing systems were utilized. Additional testing was conducted on a 2014 Kenworth T270 Class 6 research truck with a connected vulnerable ELD.

      These findings highlight an urgent need to improve the security posture in ELD systems. Following some existing best practices and adhering to known requirements can greatly improve the security of these systems. The process of discovering the vulnerabilities and exploiting them is explained in detail. Product designers, programmers, engineers, and consumers should use this information to raise awareness of these vulnerabilities and encourage the development of safer devices that connect to vehicular networks.

    • Go Tsuruoka (Waseda University), Takami Sato, Qi Alfred Chen (University of California, Irvine), Kazuki Nomoto, Ryunosuke Kobayashi, Yuna Tanaka (Waseda University), Tatsuya Mori (Waseda University/NICT/RIKEN)

      Traffic signs, essential for communicating critical rules to ensure safe and efficient traffic for entities such as pedestrians and motor vehicles, must be reliably recognized, especially in the realm of autonomous driving. However, recent studies have revealed vulnerabilities in vision-based traffic sign recognition systems to adversarial attacks, typically involving small stickers or laser projections. Our work advances this frontier by exploring a novel attack vector, the Adversarial Retroreflective Patch (ARP) attack. This method is stealthy and particularly effective at night by exploiting the optical properties of retroreflective materials, which reflect light back to its source. By applying retroreflective patches to traffic signs, the reflected light from the vehicle’s headlights interferes with the camera, causing perturbations that hinder the traffic sign recognition model’s ability to correctly detect the signs. In our preliminary study, we conducted a feasibility study of ARP attacks and observed that while a 100% attack success rate is achievable in digital simulations, it decreases to less than or equal to 90% in physical experiments. Finally, we discuss the current challenges and outline our future plans. This research gains significance in the context of autonomous vehicles’ 24/7 operation, emphasizing the critical need to assess sensor and AI vulnerabilities, especially in low-light nighttime environments, to ensure the continued safety and reliability of self-driving technologies.

    • Ryunosuke Kobayashi, Kazuki Nomoto, Yuna Tanaka, Go Tsuruoka (Waseda University), Tatsuya Mori (Waseda University/NICT/RIKEN)

      Object detection is a crucial function that detects the position and type of objects from data acquired by sensors. In autonomous driving systems, object detection is performed using data from cameras and LiDAR, and based on the results, the vehicle is controlled to follow the safest route. However, machine learning-based object detection has been reported to have vulnerabilities to adversarial samples. In this study, we propose a new attack method called “Shadow Hack” for LiDAR object detection models. While previous attack methods mainly added perturbed point clouds to LiDAR data, in this research, we introduce a method to generate “Adversarial Shadows” on the LiDAR point cloud. Specifically, the attacker strategically places materials like aluminum leisure mats to reproduce optimized positions and shapes of shadows on the LiDAR point cloud. This technique can potentially mislead LiDAR-based object detection in autonomous vehicles, leading to congestion and accidents due to actions such as braking and avoidance maneuvers. We reproduce the Shadow Hack attack method using simulations and evaluate the success rate of the attack. Furthermore, by revealing the conditions under which the attack succeeds, we aim to propose countermeasures and contribute to enhancing the robustness of autonomous driving systems.

    • Ali Shoker, Rehana Yasmin, Paulo Esteves-Verissimo (Resilient Computing & Cybersecurity Center (RC3), KAUST)

      The increasing interest in Autonomous Vehicles (AVs) is notable, driven by economic, safety, and performance reasons. Despite the growing adoption of recent AV architectures hinging on advanced AI models, there is a significant number of fatal incidents. This paper calls for the need to revisit the fundamentals of building safety-critical AV architectures for mainstream adoption of AVs. The key tenets are: (i) finding a balance between intelligence and trustworthiness, considering the efficiency and functionality brought in by AI/ML, while prioritizing indispensable safety and security; and (ii) developing an advanced architecture that addresses the hard challenge of reconciling the stochastic nature of AI/ML with the determinism of driving control theory. Introducing Savvy, a novel AV architecture leveraging the strengths of intelligence and trustworthiness, this paper advocates for a safety-first approach by integrating design-time (deterministic) control rules with optimized decisions generated by dynamic ML models, all within constrained time-safety bounds. Savvy prioritizes early identification of critical obstacles, like recognizing an elephant as an object, ensuring safety takes precedence over optimal recognition just before a collision. This position paper outlines Savvy’s motivations and concepts, with ongoing refinements and empirical evaluations in progress.

    14:15 - 15:00
    Session 3B: Vehicular Network Security – 2
    Chair: Mert Pese (Clemson University)
    Rousseau Room
    • Sharika Kumar (The Ohio State University), Imtiaz Karim, Elisa Bertino (Purdue University), Anish Arora (Ohio State University)

      Trucks play a critical role in today’s transportation system, where minor disruptions can result in a major social impact. Intra Medium and Heavy Duty (MHD) communications broadly adopt SAE-J1939 recommended practices that utilize Name Management Protocol (NMP) to associate and manage source addresses with primary functions of controller applications. This paper exposes 19 vulnerabilities in the NMP, uses them to invent various logical attacks, in some cases leveraging and in all cases validating with formal methods, and discusses their impacts. These attacks can: ➀ stealthily deny vehicle start-up by pre-playing recorded claims in monotonically descending order; ➁ successfully restrain critical vehicular device participation and institute a dead beef attack, causing reflash failure by performing a replay attack; ➂ cause stealthy address exhaustion, Thakaavath (exhaustion in Sanskrit), which rejects an address-capable controller application from network engagement by exhausting the usable address space via pre-playing claims in monotonically descending order; ➃ poison the controller application’s internally maintained source address-function association table after bypassing the imposter detection protection and execute a stealthy SA-NAME Table Poisoning Attack, thereby disabling the radar and anti-lock braking system (ABS) as well as triggering retarder braking torque dashboard warnings; ➄ cause Denial of Service (DoS) on claim messages by predicting the delay in an address reclaim and prohibiting the associated device from participating in the SAE-J1939 network; ➅ impersonate a working set master to alter the source addresses of controller applications to execute a Bot-Net attack; ➆ execute a birthday attack, a brute-force collision attack to command an invalid or existing name, thereby causing undesired vehicle behavior. The impact of these attacks is validated by demonstrations on real trucks in operation in a practical setting and on bench setups with a real engine controller connected to a CAN bus.

    • Wentao Chen, Sam Der, Yunpeng Luo, Fayzah Alshammari, Qi Alfred Chen (University of California, Irvine)

      Due to the cyber-physical nature of robotic vehicles, security is especially crucial, as a compromised system not only exposes privacy and information leakage risks, but also increases the risk of harm in the physical world. As such, in this paper, we explore the current vulnerability landscape of robotic vehicles exposed to and thus remotely accessible by any party on the public Internet. Focusing particularly on instances of the Robot Operating System (ROS), a commonly used open-source robotic software framework, we performed new Internet-wide scans of the entire IPv4 address space, identifying, categorizing, and analyzing the ROS-based systems we discovered. We further performed the first measurement of ROS scanners in the wild by setting up ROS honeypots, logging traffic, and analyzing the traffic we received. We found over 190 ROS systems on average being regularly exposed to the public Internet and discovered new trends in the exposure of different types of robotic vehicles, suggesting increasing concern regarding the cybersecurity of today’s ROS-based robotic vehicle systems.
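
      An illustrative probe of the ROS 1 master XML-RPC interface that such exposure measurements rely on (only for hosts you are authorized to test; the loopback address here is a placeholder):

      ```python
      # Illustrative sketch: a ROS 1 master exposes an XML-RPC API on port 11311;
      # getSystemState() lists publishers, subscribers, and services, which is what
      # Internet-wide exposure scans of ROS observe.
      import socket
      import xmlrpc.client

      def probe_ros_master(host, port=11311, timeout=3.0):
          socket.setdefaulttimeout(timeout)
          try:
              proxy = xmlrpc.client.ServerProxy(f"http://{host}:{port}")
              _code, _msg, (publishers, subscribers, services) = proxy.getSystemState("/exposure_probe")
              return {
                  "reachable": True,
                  "topics_published": len(publishers),
                  "topics_subscribed": len(subscribers),
                  "services": len(services),
              }
          except (OSError, xmlrpc.client.Error) as exc:
              return {"reachable": False, "error": str(exc)}

      print(probe_ros_master("127.0.0.1"))
      ```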

    • Lewis William Koplon, Ameer Ghasem Nessaee, Alex Choi (University of Arizona, Tucson), Andres Mentoza (New Mexico State University, Las Cruces), Michael Villasana, Loukas Lazos, Ming Li (University of Arizona, Tucson)

      We address the problem of cyber-physical access control for connected autonomous vehicles. The goal is to bind a vehicle’s digital identity to its physical identity represented by its physical properties such as its trajectory. We highlight that simply complementing digital authentication with sensing information remains insecure. A remote adversary with valid or compromised cryptographic credentials can hijack the physical identities of nearby vehicles detected by sensors. We propose a cyber-physical challenge-response protocol named Cyclops that relies on low-cost monocular cameras to perform cyber and physical identity binding. In Cyclops, a verifier vehicle challenges a prover vehicle to prove its claimed physical trajectory. The prover constructs a response by capturing a series of scenes in the common Field of View (cFoV) between the prover and the verifier. Verification is achieved by matching the dynamic targets in the cFoV (other vehicles crossing the cFoV). The security of Cyclops relies on the spatiotemporal traffic randomness that cannot be predicted by a remote adversary. We validate the security of Cyclops via simulations on the CARLA simulator and on-road real-world experiments in an urban setting.

    • Jun Ying, Yiheng Feng (Purdue University), Qi Alfred Chen (University of California, Irvine), Z. Morley Mao (University of Michigan and Google)

      Connected Vehicle (CV) and Connected and Autonomous Vehicle (CAV) technologies can greatly improve traffic efficiency and safety. Data spoofing attacks are a major threat to CVs and CAVs, since abnormal data (e.g., falsified trajectories) may influence vehicle navigation and deteriorate CAV/CV-based applications. In this work, we aim to design a generic anomaly detection model which can be used to identify abnormal trajectories from both known and unknown data spoofing attacks. First, the attack behaviors of two representative known attacks are modeled. Then, using driving features derived from transportation and vehicle domain knowledge, an anomaly detection framework is proposed. The framework combines a feature extractor and an anomaly classifier trained with known attack trajectories and can be applied to identify falsified trajectories generated by various attacks. In the numerical experiment, a highway segment with a signalized intersection is built in the V2X Application Spoofing Platform (VASP). To evaluate the generality of the proposed anomaly detection algorithm, we further test the proposed model with several unknown attacks provided in VASP. The results indicate that the proposed model achieves high accuracy in detecting falsified attack trajectories from both known and unknown attacks.
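
      A minimal sketch of domain-knowledge driving features and a plausibility check over a trajectory (assumed physical limits; the paper trains a learned feature extractor and anomaly classifier rather than fixed thresholds):

      ```python
      # Illustrative sketch: derive speed, acceleration, and jerk from a position trace
      # and flag trajectories that violate simple physical limits.
      import numpy as np

      def driving_features(positions, dt=0.1):
          """Speed, acceleration, and jerk profiles from a 1-D position trace."""
          speed = np.diff(positions) / dt
          accel = np.diff(speed) / dt
          jerk = np.diff(accel) / dt
          return speed, accel, jerk

      def is_plausible(positions, max_speed=40.0, max_accel=5.0, max_jerk=10.0):
          speed, accel, jerk = driving_features(positions)
          return (np.abs(speed).max() <= max_speed and
                  np.abs(accel).max() <= max_accel and
                  np.abs(jerk).max() <= max_jerk)

      t = np.arange(0, 5, 0.1)
      benign = 15.0 * t                        # steady 15 m/s
      falsified = benign.copy()
      falsified[30:] += 8.0                    # sudden 8 m position jump in one step
      print("benign trajectory plausible:   ", is_plausible(benign))
      print("falsified trajectory plausible:", is_plausible(falsified))
      ```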

  • 15:00 - 15:10
    Short Break
  • 15:10 - 15:40
    Session 4A: Autonomous Vehicle Security – 3
    Chair: Nidhi Rastogi (RIT)
    Kon Tiki Ballroom
    • Marina Moore, Aditya Sirish A Yelgundhalli (New York University), Justin Cappos (NYU)

      Software supply chain attacks are a major concern and need to be addressed by every organization, including automakers. While there are many effective technologies in both the software delivery and broader software supply chain security space, combining these technologies presents challenges specific to automotive applications. We explore the trust boundaries between the software supply chain and software delivery systems to determine where verification of software supply chain metadata should occur, how to establish a root of trust, and how supply chain policy can be distributed. Using this exploration, we design Scudo, a secure combination of software over the air and software supply chain security technologies. We show that adding full verification of software supply chain metadata on-vehicle is not only inefficient, but is also largely unnecessary for security with multiple points of repository-side verification.

      In addition, this paper describes a secure instantiation of Scudo, which integrates Uptane, a state-of-the-art software update security solution, and in-toto, a comprehensive supply chain security framework. A practical deployment has shown that Scudo provides robust software supply chain protections. The client-side power and processing costs are negligible, with the updated metadata comprising 0.504% of the total update transmission. The client-side verification adds 0.21 seconds to the total update flow. This demonstrates that Scudo is easy to deploy in ways that can efficiently and effectively catch software supply chain attacks.

    • Takami Sato, Ningfei Wang (University of California, Irvine), Yueqiang Cheng (NIO Security Research), Qi Alfred Chen (University of California, Irvine)

      Automated Lane Centering (ALC) is one of the most popular autonomous driving (AD) technologies available in many commodity vehicles. ALC can reduce the human driver’s effort by taking over their steering work. However, recent research warns that ALC can be vulnerable to off-road attacks that lead victim vehicles out of their driving lane. To be secure against off-road attacks, this paper explores the potential defense capability of low-quality localization and publicly available maps against off-road attacks on autonomous driving. We design the first map-fusion-based off-road attack detection approach, LaneGuard. LaneGuard detects off-road attacks based on the difference between the observed road shape and the driver-predefined route shape. We evaluate LaneGuard on large-scale real-world driving traces consisting of 80 attack scenarios and 11,558 benign scenarios. We find that LaneGuard can achieve an attack detection rate of 89% with a 12% false positive rate. In real-world highway driving experiments, LaneGuard exhibits no false positives while maintaining a near-zero false negative rate against simulated attacks.
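
      An illustrative sketch of the map-fusion idea: compare the heading profile of the observed road shape with the driver-predefined route shape (synthetic shapes and an assumed threshold; not LaneGuard's actual metric):

      ```python
      # Illustrative sketch: large divergence between observed and route headings is
      # treated as a possible off-road attack.
      import numpy as np

      def headings(xy):
          d = np.diff(xy, axis=0)
          return np.arctan2(d[:, 1], d[:, 0])

      def off_road_score(observed_xy, route_xy):
          """Mean absolute heading difference (radians) over the compared horizon."""
          n = min(len(observed_xy), len(route_xy)) - 1
          return np.abs(headings(observed_xy[: n + 1]) - headings(route_xy[: n + 1])).mean()

      s = np.linspace(0, 50, 26)
      route = np.c_[s, np.zeros_like(s)]                 # map says: straight road ahead
      benign_obs = np.c_[s, 0.05 * np.sin(s / 10)]       # small perception noise
      attacked_obs = np.c_[s, 0.04 * (s ** 1.5)]         # steered off toward the roadside

      THRESHOLD = 0.15   # radians, assumed
      for name, obs in [("benign", benign_obs), ("attack", attacked_obs)]:
          score = off_road_score(obs, route)
          print(f"{name}: score={score:.3f} -> {'ALERT' if score > THRESHOLD else 'ok'}")
      ```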

    • Mohammed Aldeen, Pedram MohajerAnsari, Jin Ma, Mashrur Chowdhury, Long Cheng, Mert D. Pesé (Clemson University)

      As the advent of autonomous vehicle (AV) technology revolutionizes transportation, it simultaneously introduces new vulnerabilities to cyber-attacks, posing significant challenges to vehicle safety and security. The complexity of these systems, coupled with their increasing reliance on advanced computer vision and machine learning algorithms, makes them susceptible to sophisticated AV attacks. This paper explores the potential of Large Multimodal Models (LMMs) in identifying Natural Denoising Diffusion (NDD) attacks on traffic signs. Our comparative analysis shows the superior performance of LMMs in detecting NDD samples, with an average accuracy of 82.52% across the selected models compared to 37.75% for state-of-the-art deep learning models. We further discuss the integration of LMMs within resource-constrained computational environments that mimic typical autonomous vehicles and assess their practicality through latency benchmarks. Results show the substantial superiority of GPT models in achieving lower latency, down to 4.5 seconds per image for both computation time and network latency (RTT), suggesting a viable path towards real-world deployability. Lastly, we extend our analysis to LMMs’ applicability against a wider spectrum of AV attacks, particularly focusing on Automated Lane Centering systems, emphasizing the potential of LMMs to enhance vehicular cybersecurity.

    15:10 - 15:40
    Session 4B: Human Factors and Others
    Chair: Luis Garcia (University of Utah)
    Rousseau Room
    • Paul Agbaje, Abraham Mookhoek, Afia Anjum, Arkajyoti Mitra (University of Texas at Arlington), Mert D. Pesé (Clemson University), Habeeb Olufowobi (University of Texas at Arlington)

      Millions of lives are lost due to road accidents each year, emphasizing the importance of improving driver safety measures. In addition, physical vehicle security is a persistent challenge exacerbated by the growing interconnectivity of vehicles, allowing adversaries to engage in vehicle theft and compromising driver privacy. The integration of advanced sensors with internet connectivity has ushered in the era of intelligent transportation systems (ITS), enabling vehicles to generate abundant data that facilitates diverse vehicular applications. These data can also provide insights into driver behavior, enabling effective driver monitoring to support safety and security. In this paper, we propose AutoWatch, a graph-based approach for modeling the behavior of drivers, verifying the identity of the driver, and detecting unsafe driving maneuvers. Our evaluation shows that AutoWatch can improve driver identification accuracy by up to 22% and driving maneuver classification by up to 5.7% compared to baseline techniques.

    • Cherin Lim, Tianhao Xu, Prashanth Rajivan (University of Washington)

      Human trust is critical for the adoption and continued use of autonomous vehicles (AVs). Experiencing vehicle failures that stem from security threats to the underlying technologies that enable autonomous driving can significantly degrade drivers’ trust in AVs. It is crucial to understand and measure how security threats to AVs impact human trust. To this end, we conducted a driving simulator study with forty participants who underwent three drives, including one with simulated cybersecurity attacks. We hypothesize that drivers’ trust in the vehicle is reflected through drivers’ body posture, foot movement, and engagement with vehicle controls during the drive. To test this hypothesis, we extracted body posture features from each frame in the video recordings, computed skeletal angles, and performed k-means clustering on these values to classify drivers’ foot positions. In this paper, we present an algorithmic pipeline for automatic analysis of body posture and objective measurement of trust that could be used for building AVs capable of trust calibration after security attack events.
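
      A minimal sketch of the clustering step described above, using k-means over per-frame skeletal angle features (synthetic angles; the study derives them from video recordings):

      ```python
      # Illustrative sketch: cluster per-frame (knee, ankle) angles into foot-position modes.
      import numpy as np
      from sklearn.cluster import KMeans

      rng = np.random.default_rng(3)
      # Two synthetic posture modes, e.g., foot hovering near the brake vs. resting away.
      near_pedal = rng.normal(loc=[70.0, 95.0], scale=3.0, size=(200, 2))
      resting = rng.normal(loc=[110.0, 130.0], scale=3.0, size=(200, 2))
      frames = np.vstack([near_pedal, resting])

      kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(frames)
      labels = kmeans.predict(frames)
      print("cluster centers (knee, ankle angles):")
      print(np.round(kmeans.cluster_centers_, 1))
      print("fraction of frames per cluster:", np.bincount(labels) / len(labels))
      ```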

    • Rao Li (The Pennsylvania State University), Shih-Chieh Dai (Pennsylvania State University), Aiping Xiong (Penn State University)

      Physical adversarial object-evasion attacks pose a safety concern for automated driving systems (ADS) and are a significant obstacle to their widespread adoption. To enhance the ability of ADS to address such concerns, we aim to propose a human-AI collaboration framework that brings humans into the loop to mitigate the attacks. In this WIP work, we investigate the performance of two object detectors in the YOLO series (YOLOv5 and YOLOv8) against three physical adversarial object-evasion attacks across different driving contexts in the CARLA simulator. Using static images, we found that YOLOv8 generally outperformed YOLOv5 in attack detection but remained susceptible to certain attacks in specific contexts. Moreover, the study results showed that none of the attacks achieved a high attack success rate in dynamic tests when system-level features were considered. Nevertheless, such detection results varied across different locations for each attack. Altogether, these results suggest that perception in autonomous driving, like human perception in manual driving, might also be context-dependent. Moreover, our results revealed object detection failures at a braking distance anticipated by human drivers, suggesting a necessity to involve human drivers in future evaluation processes.

  • 15:40 - 15:50
    Short Break
  • 15:50 - 16:35
    Session 5A: Physical Attacks and Defense
    Chair: Antonio Bianchi (Purdue University)
    Kon Tiki Ballroom
    • Masashi Fukunaga (Mitsubishi Electric), Takeshi Sugawara (The University of Electro-Communications)

      Integrity of sensor measurement is crucial for safe and reliable autonomous driving, and researchers are actively studying physical-world injection attacks against light detection and ranging (LiDAR). Conventional work focused on object/obstacle detectors, and the impact on LiDAR-based simultaneous localization and mapping (SLAM) has been an open research problem. Addressing the issue, we evaluate the robustness of a scan-matching SLAM algorithm in a simulation environment based on the attacker capability characterized by indoor and outdoor physical experiments. Our attack is based on Sato et al.’s asynchronous random spoofing attack that penetrates randomization countermeasures in modern LiDARs. The attack is effective with fake points injected behind the victim vehicle and potentially evades detection-based countermeasures working within the range of object detectors. We discover that mapping is susceptible along the z-axis, the direction perpendicular to the ground, because feature points are scarce either in the sky or on the road. The attack results in significant changes in the map, such as a downhill converted into an uphill. The false map induces errors in the self-position estimation on the x-y plane in each frame, which accumulate over time. In our experiment, after laser injection over 5 meters (i.e., 1 second), the victim SLAM’s self-position begins and continues to diverge from reality, resulting in a 5 m shift to the right after running 125 meters. The false map and self-position significantly affect the motion planning algorithm, too; the planned trajectory changes by 3°, with which the victim vehicle will enter the opposite lane after running 35 meters. Finally, we discuss possible mitigations against the proposed attack.

    • Michele Marazzi, Stefano Longari, Michele Carminati, Stefano Zanero (Politecnico di Milano)

      ZOOX AutoDriving Security Award Runner-up!

      With the increasing interest in autonomous vehicles (AVs), ensuring their safety and security is becoming crucial. The introduction of advanced features has increased the need for various interfaces to communicate with the external world, creating new potential attack vectors that attackers can exploit to alter sensor data. LiDAR sensors are widely employed to support autonomous driving features and generate point cloud data used by ADAS to 3D map the vehicle’s surroundings. Tampering attacks on LiDAR-generated data can compromise the vehicle’s functionalities and seriously threaten passengers and other road users. Existing approaches to LiDAR data tampering detection show security flaws and can be bypassed by attackers through design vulnerabilities. This paper proposes a novel approach for tampering detection of LiDAR-generated data in AVs, employing a watermarking technique. We validate our approach through experiments to prove its feasibility in real-world time-constrained scenarios and its efficacy in detecting LiDAR tampering attacks. Our approach performs better than the current state-of-the-art LiDAR watermarking techniques while addressing critical issues related to watermark security and imperceptibility.
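
      A small illustrative sketch of point-cloud watermarking for tamper detection (a simple parity-of-quantization scheme with assumed parameters, not the paper's technique):

      ```python
      # Illustrative sketch: embed a key-derived bit in each point's z coordinate and
      # check how many bits survive; heavily perturbed regions lower the match rate.
      import hashlib
      import numpy as np

      STEP = 0.002   # 2 mm quantization step, assumed to sit below useful precision

      def key_bits(n, key=b"demo-key"):
          digest = hashlib.sha256(key).digest()
          return np.unpackbits(np.frombuffer(digest, dtype=np.uint8))[:n]

      def embed(points, key=b"demo-key"):
          bits = key_bits(len(points), key)
          q = np.round(points[:, 2] / STEP).astype(np.int64)
          q += (q % 2 != bits)                     # force z-quantization parity to the key bit
          out = points.copy()
          out[:, 2] = q * STEP
          return out

      def check(points, key=b"demo-key"):
          bits = key_bits(len(points), key)
          q = np.round(points[:, 2] / STEP).astype(np.int64)
          return float(np.mean((q % 2) == bits))   # fraction of watermark bits intact

      rng = np.random.default_rng(4)
      cloud = embed(rng.uniform(-20, 20, size=(200, 3)))
      print("untampered match rate:", check(cloud))
      tampered = cloud.copy()
      tampered[50:80, 2] += rng.uniform(0.05, 0.2, 30)   # tampered region shifted upward
      print("tampered match rate:  ", round(check(tampered), 3))
      ```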

    • Yuki Hayakawa (Keio University), Takami Sato (University of California, Irvine), Ryo Suzuki, Kazuma Ikeda, Ozora Sako, Rokuto Nagata (Keio University), Qi Alfred Chen (University of California, Irvine), Kentaro Yoshioka (Keio University)

      LiDAR stands as a critical sensor in the realm of autonomous vehicles (AVs). Considering its safety and security criticality, recent studies have actively researched its security and warned of various safety implications of LiDAR spoofing attacks, which can critically affect AVs by injecting ghost objects or removing legitimate objects from their detection. To defend against LiDAR spoofing attacks, pulse fingerprinting has been regarded as one of the most promising countermeasures, and recent research demonstrates its high defense capability, especially against object removal attacks. In this WIP paper, we report our progress in conducting further security analysis of pulse fingerprinting against LiDAR spoofing attacks. We design a novel adaptive attack strategy, the Adaptive High-Frequency Removal (A-HFR) attack, which can be effective against broader types of LiDARs than the existing HFR attacks. We evaluate the A-HFR attack on three commercial LiDARs with pulse fingerprinting and find that the A-HFR attack can successfully remove over 96% of the point cloud within a 20° horizontal and a 16° vertical angle. Our findings indicate that current pulse fingerprinting techniques might not be sufficiently robust to thwart spoofing attacks and may not be an ultimate countermeasure against them. We also discuss potential strategies to enhance the defensive efficacy of pulse fingerprinting against such attacks, and outline our future plans.

    • Ryo Suzuki (Keio University), Takami Sato (University of California, Irvine), Yuki Hayakawa, Kazuma Ikeda, Ozora Sako, Rokuto Nagata (Keio University), Qi Alfred Chen (University of California, Irvine), Kentaro Yoshioka (Keio University)

      LiDAR (Light Detection and Ranging) is an essential sensor for autonomous driving (AD), increasingly being integrated not only into prototype vehicles but also into commodity vehicles. Due to its critical safety implications, recent studies have explored its security risks and exposed potential vulnerabilities to LiDAR spoofing attacks, which manipulate measurement data by emitting malicious lasers into the LiDAR. Nevertheless, deploying LiDAR spoofing attacks against moving AD vehicles still poses significant technical challenges, particularly in accurately aiming at the LiDAR of a moving AV from the roadside. The current state-of-the-art attack can be successful only at ≤5 km/h. Motivated by this, we design a novel tracking and aiming methodology and conduct a feasibility study to explore the actual practicality of LiDAR spoofing attacks against AD vehicles at cruising speeds. In this work, we report our initial results demonstrating that our object removal attack successfully makes the targeted pedestrian undetectable with ≥90% success rates in a real-world scenario where the adversary at the roadside attacks the victim AD vehicle approaching at 35 km/h. Finally, we discuss the current challenges and our future plans.

    15:50 - 16:35
    Session 5B: Vehicular Network Security – 3
    Chair: Hanif Rahbari (RIT)
    Rousseau Room
    • Carson Green, Rik Chatterjee, Jeremy Daily (Colorado State University)

      Modern automotive operations are governed by embedded computers that communicate over standardized protocols, forming the backbone of vehicular networking. In the domain of commercial vehicles, these systems predominantly rely on high-level protocols running on top of the Controller Area Network (CAN) protocol for internal communication in medium- and heavy-duty applications. Critical to this ecosystem is the Unified Diagnostic Services (UDS) protocol, outlined in ISO 14229 (Unified Diagnostic Services) and ISO 15765 (Diagnostic Communication over CAN), which provides essential diagnostic functionalities. This paper presents three distinct scenarios, demonstrating potential shortcomings of the UDS protocol standards and how they can be exploited to launch attacks on in-vehicle computers in commercial vehicles while bypassing security mechanisms.

      In the initial two scenarios, we identify and demonstrate two vulnerabilities in the ISO 14229 protocol specifications. Subsequently, in the final scenario, we highlight and demonstrate a vulnerability specific to the ISO 15765 protocol specifications.

      For demonstration purposes, bench-level test systems equipped with real Electronic Control Units (ECUs) connected to a CAN bus were utilized. Additional testing was conducted on a comprehensively equipped front cab assembly of a 2018 Freightliner Cascadia truck, configured as an advanced test bench. The test results reveal how attacks targeting specific protocols can compromise individual ECUs. Furthermore, in the Freightliner Cascadia truck setup, we found a network architecture typical of modern vehicles, where a gateway unit segregates internal ECUs from diagnostics. This gateway, while designed to block standard message injection and spoofing attacks, specifically allows all UDS-based diagnostic messages. This selective allowance inadvertently creates a vulnerability to UDS protocol attacks, underscoring a critical area for security enhancements in commercial vehicle networks. These findings are crucial for engineers and programmers responsible for implementing the diagnostic protocols in their communication subsystems, emphasizing the need for enhanced security measures.
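
      An illustrative construction of the kind of UDS-over-ISO-TP request frames these scenarios involve (the 29-bit addressing ID is an assumption chosen for heavy-vehicle style physical addressing; nothing is transmitted here):

      ```python
      # Illustrative sketch: build ISO 15765-2 single-frame payloads for a few common
      # UDS services and print them alongside an assumed 29-bit addressing CAN ID.
      def uds_single_frame(service_id, *params):
          """ISO-TP single frame: length nibble, then UDS service and parameters."""
          payload = bytes([service_id, *params])
          assert len(payload) <= 7, "a single frame carries at most 7 payload bytes"
          frame = bytes([len(payload)]) + payload
          return frame + bytes(8 - len(frame))          # pad to a full 8-byte CAN frame

      requests = {
          "DiagnosticSessionControl (extended session)": uds_single_frame(0x10, 0x03),
          "ECUReset (hard reset)":                       uds_single_frame(0x11, 0x01),
          "SecurityAccess (request seed, level 1)":      uds_single_frame(0x27, 0x01),
      }
      TX_ID = 0x18DA00F9   # assumed 29-bit physical addressing ID (target 0x00, tester 0xF9)
      for name, data in requests.items():
          print(f"CAN ID 0x{TX_ID:08X}  data {data.hex(' ').upper()}  <- {name}")
      ```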

    • H M Sabbir Ahmad, Ehsan Sabouni, Akua Dickson (Boston University), Wei Xiao (Massachusetts Institute of Technology), Christos Cassandras, Wenchao Li (Boston University)

      We address the security of a network of Connected and Automated Vehicles (CAVs) cooperating to safely navigate through a conflict area (e.g., traffic intersections, merging roadways, roundabouts). Previous studies have shown that such a network can be targeted by adversarial attacks causing traffic jams or safety violations ending in collisions. We focus on attacks targeting the V2X communication network used to share vehicle data, and also consider uncertainties due to noise in sensor measurements and communication channels. To combat these, motivated by recent work on the safe control of CAVs, we propose a trust-aware robust event-triggered decentralized control and coordination framework that can provably guarantee safety. We maintain a trust metric for each vehicle in the network, computed based on its behavior and used to balance the trade-off between conservativeness (when deeming every vehicle untrustworthy) and guaranteed safety and security. It is important to highlight that our framework is invariant to the specific choice of the trust framework. Based on this framework, we propose an attack detection and mitigation scheme with twofold benefits: (i) the trust framework is immune to false positives, and (ii) it provably guarantees safety against false-positive cases. We use extensive simulations (in SUMO and CARLA) to validate the theoretical guarantees and demonstrate the efficacy of our proposed scheme in detecting and mitigating adversarial attacks.

    • Sampath Rajapaksha, Harsha Kalutarage (Robert Gordon University, UK), Garikayi Madzudzo (Horiba Mira Ltd, UK), Andrei Petrovski (Robert Gordon University, UK), M. Omar Al-Kadri (University of Doha for Science and Technology)

      The Controller Area Network (CAN Bus) has emerged as the de facto standard for in-vehicle communication. However, the CAN bus lacks security features, such as encryption and authentication, making it vulnerable to cyberattacks. In response, the current literature has prioritized the development of Intrusion Detection Systems (IDSs). Nevertheless, the progress of IDS research encounters significant obstacles due to the absence of high-quality, publicly available real CAN data, especially data featuring realistic, verified attacks. This scarcity primarily arises from the substantial cost and associated risks involved in generating real attack data on moving vehicles. Addressing this challenge, this paper introduces a novel CAN bus attack dataset collected from a modern automobile equipped with autonomous driving capabilities, operating under real-world driving conditions. The dataset includes 17 hours of benign data, covering a wide range of scenarios, crucial for training IDSs. Additionally, it comprises 26 physically verified real injection attacks, including Denial-of-Service (DoS), fuzzing, replay, and spoofing, targeting 13 CAN IDs. Furthermore, the dataset encompasses 10 simulated masquerade and suspension attacks, offering 2 hours and 54 minutes of attack data. This comprehensive dataset facilitates rigorous testing and evaluation of various IDSs against a diverse array of realistic attacks, contributing to the enhancement of in-vehicle security.

  • 16:35 - 17:25
    Demos and Posters Session
    Session Chair: Sara Rampazzi (University of Florida)
    Note: Best Demo Award voting ends at 17:15.
    Kon Tiki Ballroom and Rousseau Room
  • 17:25 - 17:30
    Demo Award and Closing Remarks
    Kon Tiki Ballroom
  • 17:30 - 19:00
    VehicleSec Community Reception
    Boardroom and Foyer