Symposium on Vehicle Security and Privacy (VehicleSec) 2023 Program
Monday, 27 February
Prof. Kang Shin (Kevin and Nancy O'Connor Professor of Computer Science, and the Founding Director of the Real-Time Computing Laboratory (RTCL) in the Electrical Engineering and Computer Science Department at the University of Michigan)
Component faults, bugs, and malicious attacks can all degrade, or even destroy, a semi-autonomous system's (SAS's) ability to correctly capture its operation context, which is essential to support critical safety features like emergency braking in an autonomous car. While safety features in modern SASs usually rely on static assignment of control priority, such a design may lead to catastrophic accidents when accompanied by erroneous/compromised control and context estimation.
To mitigate the grave danger of SASs making control decisions on incorrect data, and to learn from the Boeing 737 MAX incidents/crashes, we propose CADCA, a novel control decision-maker for SASs that is designed to operate under sensor/data errors or falsifications as well as malicious/erroneous control inputs, with the ultimate goal of resolving conflicting control inputs to ensure safety. Our extensive evaluation (of more than 15,700 test cases) has shown CADCA to achieve a 98% success rate in preventing the execution of incorrect control decisions caused by component failures and/or malicious attacks in the most common scenarios.
This talk will detail the motivation, design and evaluation of CADCA with semi-autonomous vehicles as a representative SAS. This is joint work with Daniel Chen and Noah Curran.
Kang G. Shin is the Kevin & Nancy O'Connor Professor of Computer Science in the Department of Electrical Engineering and Computer Science, The University of Michigan, Ann Arbor. His current research focuses on QoS-sensitive computing and networking as well as on embedded real-time and cyber-physical systems.
He has supervised the completion of 91 PhDs, and authored/coauthored close to 1,000 technical articles, a textbook, and about 60 patents or invention disclosures. He has received numerous awards, including the 2019 Caspar Bowden Award for Outstanding Research in Privacy Enhancing Technologies, the Best Paper Awards from the 2011 ACM International Conference on Mobile Computing and Networking (MobiCom'11), the 2011 IEEE International Conference on Autonomic Computing, and the 2010 and 2000 USENIX Annual Technical Conferences, as well as the 2003 IEEE Communications Society William R. Bennett Prize Paper Award and the 1987 Outstanding IEEE Transactions on Automatic Control Paper Award. He has also received several institutional awards, including the Research Excellence Award in 1989, Outstanding Achievement Award in 1999, Distinguished Faculty Achievement Award in 2001, and Stephen Attwood Award in 2004 from The University of Michigan (the highest honor bestowed on Michigan Engineering faculty); a Distinguished Alumni Award of the College of Engineering, Seoul National University in 2002; the 2003 IEEE RTC Technical Achievement Award; and the 2006 Ho-Am Prize in Engineering (the highest honor bestowed on Korean-origin engineers).
He chaired the Michigan Computer Science and Engineering Division for three years starting in 1991, and has also chaired several major conferences, including 2009 ACM MobiCom and 2005 ACM/USENIX MobiSys. He co-founded a couple of startups, licensed some of his technologies to industry, and served as an Executive Advisor for Samsung Research.
Hongchao Zhang (Washington University in St. Louis), Zhouchi Li (Worcester Polytechnic Institute), Shiyu Cheng (Washington University in St. Louis), Andrew Clark (Washington University in St. Louis)
Autonomous vehicles rely on LiDAR sensors to detect obstacles such as pedestrians, other vehicles, and fixed infrastructure. LiDAR spoofing attacks have been demonstrated that either create erroneous obstacles or prevent detection of real obstacles, resulting in unsafe driving behaviors. In this paper, we propose an approach to detect and mitigate LiDAR spoofing attacks by leveraging LiDAR scan data from neighboring vehicles. This approach exploits the fact that spoofing attacks can typically only be mounted on one vehicle at a time, and introduce additional points into the victim's scan that can be readily detected by comparison with other, unmodified scans. We develop a Fault Detection, Identification, and Isolation (FDII) procedure that identifies non-existent obstacle, physical removal, and adversarial object attacks, while also estimating the actual locations of obstacles. We propose a control algorithm that guarantees that these estimated object locations are avoided. We validate our framework using a CARLA simulation study, in which we verify that our FDII algorithm correctly detects each attack pattern.
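The full FDII procedure is more elaborate, but the underlying cross-scan consistency check can be sketched as follows. This is a minimal illustration with made-up coordinates, assuming both scans have been transformed into a shared world frame; the tolerance value is purely illustrative.

```python
import math

def flag_spoofed_points(victim_scan, neighbor_scan, tol=0.5):
    """Flag points in the victim's scan that have no counterpart
    within `tol` meters in a neighboring vehicle's scan."""
    suspect = []
    for p in victim_scan:
        if not any(math.dist(p, q) <= tol for q in neighbor_scan):
            suspect.append(p)
    return suspect

# A real obstacle seen by both vehicles, plus a spoofed point
# that only the victim's (attacked) sensor reports.
victim = [(10.0, 2.0), (25.0, -1.0)]
neighbor = [(10.1, 2.1)]
suspect = flag_spoofed_points(victim, neighbor)  # [(25.0, -1.0)]
```

A symmetric check (points in the neighbor's scan missing from the victim's) would flag physical-removal attacks in the same way.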
Andrew Roberts (Tallinn University of Technology), Mohsen Malayjerdi (Tallinn University of Technology), Mauro Bellone (Tallinn University of Technology), Olaf Maennel (The University of Adelaide), Ehsan Malayjerdi (Tallinn University of Technology)
The safety and security of navigation and planning algorithms are essential for the adoption of autonomous driving in real-world operational environments. Adversarial threats to local-planning algorithms are a developing field. Attacks have primarily targeted trajectory-prediction algorithms, which the autonomous vehicle uses to predict the motion of the ego vehicle and other environmental objects in order to calculate a safe planning route. This work extends the attack surface to a rule-based local-planning algorithm, specifically its cost-based planning function, which is used to estimate the safest and most efficient route. Targeting this algorithm, which is used in a real-world, operational autonomous vehicle program, we devise two attacks: 1) deviation of the lateral and longitudinal pose values, and 2) time-delay of the sensed-data input messages to the local-planning nodes. Using a low-fidelity simulation testing environment, we conduct a sensitivity analysis over multiple deviation ranges and time-delay durations. We find that the impact of the adversarial attack cases is visible in the rate of failure to complete the mission and in the occurrence of safety violations. The cost function is sensitive to deviations in lateral and longitudinal pose and to longer message delays. The sensitivity analysis suggests minor deviations of the (lateral, longitudinal) pose values as an optimal range for the attacker's search space. One option for mitigating such attacks is for the AV to run a concurrent planning instance for redundancy.
Takami Sato (University of California, Irvine), Sri Hrushikesh Varma Bhupathiraju (University of Florida), Michael Clifford (Toyota InfoTech Labs), Takeshi Sugawara (The University of Electro-Communications), Qi Alfred Chen (University of California, Irvine), Sara Rampazzi (University of Florida)
All vehicles must follow the rules that govern traffic behavior, regardless of whether the vehicles are human-driven or Connected, Autonomous Vehicles (CAVs). Road signs indicate locally active rules, such as speed limits and requirements to yield or stop. Recent research has demonstrated attacks, such as adding stickers or dark patches to signs, that cause CAV sign misinterpretation, resulting in potential safety issues. Humans can see and potentially defend against these attacks. But humans cannot detect what they cannot observe. We have developed the first physical-world attack against CAV traffic sign recognition systems that is invisible to humans. Utilizing Infrared Laser Reflection (ILR), we implement an attack that affects CAV cameras but that humans cannot perceive. In this work, we formulate the threat model and requirements for an ILR-based sign perception attack. Next, we evaluate attack effectiveness against popular, CNN-based traffic sign recognition systems. We demonstrate a 100% success rate against stop and speed limit signs in our laboratory evaluation. Finally, we discuss the next steps in our research.
CANtropy: Time Series Feature Extraction-Based Intrusion Detection Systems for Controller Area Networks
Md Hasan Shahriar, Wenjing Lou, Y. Thomas Hou (Virginia Polytechnic Institute and State University)
A controller area network (CAN) connects dozens of electronic control units (ECUs), ensuring reliable and efficient data transmission. Because the CAN protocol lacks security features, in-vehicle networks are susceptible to a wide spectrum of threats, from simple high-frequency injections to sophisticated masquerade attacks that target individual sensor values (signals). Hence, advanced analysis of the multidimensional time-series data is needed to learn the complex patterns of individual signals and their mutual dependencies. Although deep learning (DL)-based intrusion detection systems (IDSs) have shown potential in this domain, they tend to suffer from poor generalization, as they need optimization at every component. To detect such advanced CAN attacks, we propose CANtropy, a manual feature engineering-based lightweight CAN IDS. For each signal, CANtropy explores a comprehensive set of features from both the temporal and statistical domains and selects only the effective subset of features in the detection pipeline to ensure scalability. CANtropy then uses a lightweight unsupervised anomaly-detection model based on principal component analysis to learn the mutual dependencies of the features and detect abnormal patterns in the sequence of CAN messages. Evaluation results on the advanced SynCAN dataset show that CANtropy provides a comprehensive defense against diverse types of cyberattacks, with an average AUROC score of 0.992, outperforming the existing DL-based baselines.
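To illustrate the kind of per-signal temporal and statistical features such a pipeline extracts, here is a minimal sketch over one signal window. The specific feature names are illustrative, not CANtropy's actual feature set:

```python
import statistics

def extract_features(window):
    """Temporal and statistical features for one CAN signal window."""
    diffs = [b - a for a, b in zip(window, window[1:])]
    return {
        "mean": statistics.fmean(window),        # statistical domain
        "std": statistics.pstdev(window),
        "min": min(window),
        "max": max(window),
        "mean_abs_diff": statistics.fmean(abs(d) for d in diffs),  # temporal domain
    }

feats = extract_features([1.0, 2.0, 3.0, 2.0, 1.0])
```

A feature-selection pass would then keep only the subset that discriminates well on held-out benign data, before feeding the reduced feature vectors to the PCA-based detector.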
WIP: AMICA: Attention-Based Multi-Identifier Model for Asynchronous Intrusion Detection on Controller Area Networks
Natasha Alkhatib (Télécom Paris), Lina Achaji (INRIA), Maria Mushtaq (Télécom Paris), Hadi Ghauch (Télécom Paris), Jean-Luc Danger (Télécom Paris)
The adoption of external connectivity on modern vehicles and the increasing integration of complex automotive software have paved the way for novel attack scenarios exploiting the vulnerabilities of in-vehicle protocols. The Controller Area Network (CAN) bus, a widely used communication network between electronic control units (ECUs) in vehicles, therefore requires urgent monitoring. Predicting sophisticated intrusions that affect interdependencies between several CAN signals transmitted by distinct IDs requires modeling two key dimensions: 1) the time dimension, where we model the temporal relationships between signals carried by each ID separately, and 2) the interaction dimension, where we model the interaction between IDs, i.e., how the state of each CAN ID affects the others. In this work, we propose a novel deep learning-based multi-agent intrusion detection system, AMICA, that uses an attention-based self-supervised learning technique to detect stealthy in-vehicle intrusions, i.e., those that disturb not only normal timing or ID distributions but also the data values carried by multiple IDs, along with other attack types. The proposed model is evaluated on the benchmark dataset SynCAN. Our source code is available at: https://github.com/linaashaji/AMICA
Sampath Rajapaksha (Robert Gordon University), Harsha Kalutarage (Robert Gordon University), M.Omar Al-Kadri (Birmingham City University), Andrei Petrovski (Robert Gordon University), Garikayi Madzudzo (Horiba Mira Ltd)
Modern automobiles are equipped with a large number of electronic control units (ECUs) to provide safety, driver-assistance, and comfort services. The controller area network (CAN) provides real-time data transmission between ECUs with adequate reliability for in-vehicle communication. However, the lack of security measures such as authentication and encryption makes the CAN bus vulnerable to cyberattacks, which affect the safety of passengers and the surrounding environment. Intrusion detection systems (IDSs) based on one-class classification have been proposed to detect CAN bus intrusions. However, these IDSs require large amounts of benign data covering different driving activities for training, which is challenging given the variety of such activities. This paper presents CAN-ODTL, a novel on-device transfer learning-based technique that retrains the IDS using streaming CAN data on a resource-constrained Raspberry Pi device to improve the IDS. Optimized data pre-processing and model quantization minimize the Raspberry Pi's CPU and RAM usage, making CAN-ODTL suitable for deployment on the CAN bus as an additional ECU to detect in-vehicle cyberattacks. Float16 quantization reduces the TensorFlow model's memory footprint by 78% and its detection latency by 83%. Evaluation on a real public dataset over a range of seven attacks, including more sophisticated masquerade attacks, shows that CAN-ODTL outperforms the pre-trained and baseline models with over a 99% detection rate for realistic attacks. Experiments on the Raspberry Pi demonstrate that CAN-ODTL can detect a wide variety of attacks with a near real-time detection latency of 125 ms.
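The memory saving from float16 quantization can be seen without any ML framework: storing the same weights in binary16 instead of binary32 halves the footprint at a small precision cost. This sketch uses only the standard library's half-precision struct format (it is not the paper's TensorFlow pipeline; the weight values are made up):

```python
import struct

weights = [0.1234, -0.5678, 0.9012]

f32 = struct.pack(f"{len(weights)}f", *weights)  # float32 baseline
f16 = struct.pack(f"{len(weights)}e", *weights)  # float16 quantized

assert len(f16) * 2 == len(f32)  # half the memory

# Quantization error stays small for values within float16's range.
recovered = struct.unpack(f"{len(weights)}e", f16)
max_err = max(abs(a - b) for a, b in zip(weights, recovered))
assert max_err < 1e-3
```

On real hardware the latency gain comes from smaller memory traffic and cheaper arithmetic, which is what the reported 83% reduction reflects.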
Drone swarms are becoming increasingly prevalent in important missions, including military operations, rescue tasks, environmental monitoring, and disaster recovery. Member drones coordinate with each other to efficiently and effectively accomplish a given mission. To automatically coordinate a swarm, member drones exchange critical messages (e.g., their positions, locations of identified obstacles, and detected search targets) about their observed environment and missions over wireless communication channels. Therefore, swarms need a pairing system to establish secure communication channels that protect the confidentiality and integrity of the messages. However, swarm properties and the open physical environment in which they operate bring unique challenges in establishing cryptographic keys between drones.
In this paper, we first outline an adversarial model and the ideal design requirements for secure pairing in drone swarms. We then survey existing human-in-the-loop-based, context-based, and public key cryptography (PKC) based pairing methods to explore their feasibility in drone swarms. Our exploration, unfortunately, shows that existing techniques fail to fully meet the unique requirements of drone swarms. Thus, we propose research directions that can meet these requirements for secure, energy-efficient, and scalable swarm pairing systems.
Jack Sturgess, Sebastian Köhler, Simon Birnbach, Ivan Martinovic (University of Oxford)
Electric vehicle charging sessions can be authorised in different ways, ranging from smartphone applications to smart cards with unique identifiers that link the electric vehicle to the charging station. However, these methods do not provide strong authentication guarantees. In this paper, we propose a novel second-factor authentication scheme to tackle this problem. We show that by using inertial sensor data collected from IMU sensors, either embedded in the handle of the charging cable or worn on a separate smartwatch, users can be authenticated implicitly by behavioural biometrics as they unhook the cable from the charging station and plug it into their car at the start of a charging session. To validate the system, we conducted a user study (n=20) to collect data, and we developed a suite of authentication models that achieve equal error rates (EERs) of 0.06.
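The equal error rate (EER) reported above is the point where the false accept rate equals the false reject rate. A minimal sketch of how it is computed from genuine and impostor match scores (the score values below are made up):

```python
def equal_error_rate(genuine, impostor):
    """Return the rate at the threshold where the false accept
    rate (FAR) and false reject rate (FRR) are closest."""
    best = None
    for t in sorted(genuine + impostor):
        far = sum(s >= t for s in impostor) / len(impostor)  # impostors accepted
        frr = sum(s < t for s in genuine) / len(genuine)     # genuine rejected
        if best is None or abs(far - frr) < abs(best[0] - best[1]):
            best = (far, frr)
    return (best[0] + best[1]) / 2

# Perfectly separated toy scores give an EER of 0.
eer = equal_error_rate([0.9, 0.8, 0.7], [0.2, 0.3, 0.4])
```

With overlapping score distributions, as in any real biometric, the curves cross at a nonzero rate, e.g. the 0.06 the authors report.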
Evan Allen (Virginia Tech), Zeb Bowden (Virginia Tech Transportation Institute), Randy Marchany (Virginia Tech), J. Scot Ransbottom (Virginia Tech)
Modern vehicles are increasingly connected systems that expose a wide variety of security risks to their users. Message authentication prevents entire classes of these attacks, such as message spoofing and electronic control unit impersonation, but current in-vehicle networks do not include message authentication features. Latency and throughput requirements for vehicle traffic can be very stringent (100 Mbps in some cases), making it difficult to implement message authentication with cryptography due to the required overheads. This work investigates the feasibility of implementing cryptography-based message authentication in Automotive Ethernet networks that is fast enough to comply with these performance requirements. We find that it is infeasible to include Message Authentication Codes in all traffic without costly hardware accelerators and propose an alternate approach for future research to minimize the cost of authenticated traffic.
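The per-frame overhead at issue has two parts: the bytes of the tag itself and the time to compute and verify it. A generic sketch of a truncated-MAC scheme of the kind such work evaluates (this is not the paper's specific construction; the key and tag length are illustrative):

```python
import hmac
import hashlib

KEY = b"\x01" * 16  # shared per-link key (illustrative value)
MAC_LEN = 8         # truncated tag to limit per-frame byte overhead

def tag(frame: bytes) -> bytes:
    """Compute a truncated HMAC-SHA256 tag for one frame."""
    return hmac.new(KEY, frame, hashlib.sha256).digest()[:MAC_LEN]

def verify(frame: bytes, mac: bytes) -> bool:
    """Constant-time check of a received frame's tag."""
    return hmac.compare_digest(tag(frame), mac)

frame = b"steering_angle=12.5"
mac = tag(frame)
assert verify(frame, mac)
assert not verify(b"steering_angle=99.9", mac)  # spoofed frame rejected
```

Even this cheap software construction adds microseconds per frame, which is why the authors find full-coverage MACs hard to sustain at 100 Mbps without hardware acceleration.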
(1) Jonathan Petit, Secure ML Performance Benchmark (Qualcomm)
(2) David Balenson, The Road to Future Automotive Research Datasets: PIVOT Project and Community Workshop (USC Information Sciences Institute)
(3) Jeremy Daily, CyberX Challenge Events (Colorado State University)
(4) Mert D. Pesé, DETROIT: Data Collection, Translation and Sharing for Rapid Vehicular App Development (Clemson University)
(5) Ning Zhang, Timing Security in Cyber-physical Systems (Washington University in St. Louis)
(6) Craig Rodine, Large-scale simulation of EV Charging PKI (Sandia National Laboratories)
Michael Westra (In-Vehicle Cyber Security Technical Manager, Ford)
Automotive cybersecurity research has tended to focus on existing vehicle technology, including areas like CAN, intrusion detection, and connected IT systems. This talk will cover technology areas that have recently appeared in vehicles or are likely to appear soon, focusing on where security work could prove fruitful, including Ethernet, the software-defined vehicle, electrification, and monitoring. If time permits, it will briefly discuss recent regulatory changes affecting the automotive industry.
Michael Westra is the Connected Vehicle Cyber Security Technical Manager at Ford Motor Company focused on Cyber Security for connectivity into vehicles and connected technology including embedded modems, Infotainment, cloud, autonomous, and mobile systems. Mike has a MS from University of Michigan in Software Engineering and a BS from Calvin University in Computer Science. Mike has over 25 years at Ford, with roles including leading Product Cyber, software architect for SYNC, leading a software team developing supercomputer modeling applications, and various software development roles. Mike has over 25 patents issued or in various stages of filing.
Nicolas Quero (Expleo France), Aymen Boudguiga (CEA LIST), Renaud Sirdey (CEA LIST), Nadir Karam (Expleo France)
Platooning is an upcoming technology which aims at improving transportation by allowing a leading human-driven vehicle to automatically guide multiple trucks to their respective destinations, saving driver time, improving road efficiency and reducing gas consumption. However, efficient linkage of trucks to platoons requires the centralization and processing of business-critical data which truck operators are not willing to disclose. In order to address these issues, we investigate how homomorphic encryption can be used at the core of a protocol for privately linking a vehicle to a nearby platoon without disclosing its location and destination. Furthermore, we provide experimental results illustrating that such protocols achieve acceptable performances and latencies at practical platoon database scales (serving around 500 simultaneous clients on a single platooning server processor core with sub second latency over databases of up to ≈60000 platoons scattered among over 250 destinations).
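The protocol itself is not described here, but the property it builds on, additive homomorphism, can be sketched with a toy Paillier instance. The parameters below are tiny and completely insecure, chosen purely so the arithmetic is easy to follow; a real deployment would use 2048-bit moduli:

```python
import math
import random

# Toy Paillier keypair (insecure demo primes).
p, q = 17, 19
n, n2 = p * q, (p * q) ** 2
g = n + 1
lam = math.lcm(p - 1, q - 1)

def L(x):
    return (x - 1) // n

mu = pow(L(pow(g, lam, n2)), -1, n)  # decryption helper

def encrypt(m):
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return (L(pow(c, lam, n2)) * mu) % n

# Additive homomorphism: multiplying ciphertexts adds the plaintexts,
# so a server can aggregate encrypted positions/destinations blindly.
c = (encrypt(42) * encrypt(77)) % n2
assert decrypt(c) == 119
```

This is what lets the platooning server compute on encrypted location and destination data without ever seeing the plaintext values.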
Vehicle-to-everything (V2X) communication is essential to redefining transportation by providing real-time, highly reliable, and actionable information flows that enable safety, mobility, and environmental applications. V2X communications and solutions enable the exchange of information (e.g., Basic Safety Messages (BSMs)) between vehicles, and between vehicles and network infrastructure. To ensure data quality, and hence proper action, V2X data must be authenticated and correct. In this paper, we propose an extensive attack platform, called VASP, which contains 68 BSM attacks. The platform can be used to enhance V2X threat assessment, design relevant detectors, and guide standardization and deployment prioritization. The objective is to provide the security community with a tool to help build a more robust V2X system.
A common vision for large-scale autonomous vehicle deployment is in a ride-hailing context. While this promises tremendous societal benefits, large-scale deployment can also exacerbate the impact of potential vulnerabilities of autonomous vehicle technologies. One particularly concerning vulnerability demonstrated in recent security research involves GPS spoofing, whereby a malicious party can introduce significant error into the perceived location of the vehicle. However, such attacks focus on a single target vehicle. Our goal is to understand the systemic impact of a limited number of carefully placed spoofing devices on the quality of a ride-hailing service that employs a large number of autonomous vehicles. We consider two variants of this problem: 1) a static variant, in which the spoofing device locations and their configuration are fixed, and 2) a dynamic variant, in which both the spoofing devices and their configuration can change over time. In addition, we consider two possible attack objectives: 1) to maximize overall travel delay, and 2) to minimize the number of successfully completed requests (dropping off passengers at the wrong destinations). First, we show that the problem is NP-hard even in the static case. Next, we present an integer linear programming approach for solving the static variant of the problem, as well as a novel deep reinforcement learning approach for the dynamic variant. Our experiments on a real traffic network demonstrate that the proposed attacks on autonomous fleets are highly successful, and even a few spoofing devices can significantly degrade the efficacy of an autonomous ride-hailing fleet.
Wei Sun, Kannan Srinivasan (The Ohio State University)
Being followed by another vehicle while driving is frightening and leaks private information (e.g., location). Moreover, deliberately following another vehicle may cause serious traffic accidents. A stalking vehicle needs to maintain an appropriate separation from the vehicle it follows so that it neither loses its target nor reveals itself. To put the driver's privacy and safety first, it is essential to discriminate between stalking vehicles (i.e., abnormal following vehicles) and normal following vehicles. However, there are no infrastructure-free, ubiquitous in-vehicle systems that can detect abnormal following vehicles while driving.
To this end, we propose P2D2, a Privacy-Preserving Defensive Driving system that detects abnormal following vehicles through sensor fusion. Specifically, we use the camera to extract each following vehicle's following time, and the IMU sensors (e.g., gyroscope) to extract our vehicle's critical driving behavior (CDB), such as making a left or right turn. We harness the spatial diversity of IMU sensing data to remove the artifacts of road-surface conditions (e.g., bumps) on CDB detection. We then leverage a machine learning-based anomaly-detection algorithm to detect abnormal following vehicles, based on each following vehicle's following time and our vehicle's critical driving behavior within that time. Our experimental results show an F-1 score of 97.45% for abnormal following vehicle detection in different driving scenarios during our daily commute.
Ankit Gangwal (IIIT Hyderabad), Aakash Jain (IIIT Hyderabad) and Mauro Conti (University of Padua)
Electric vehicles (EVs) represent the long-term green substitute for traditional fuel-based vehicles. To encourage EV adoption, the trust of the end-users must be assured.
In this work, we focus on a recently emerging privacy threat of profiling and identifying EVs via the analog electrical data exchanged during the EV charging process. The core focus of our work is to investigate the feasibility of such a threat at scale. To this end, we first propose an improved EV profiling approach that outperforms the state-of-the-art EV profiling techniques. Next, we exhaustively evaluate the performance of our improved approach to profile EVs in real-world settings. In our evaluations, we conduct a series of experiments covering 25,032 charging sessions from 530 real EVs, sub-sampled datasets with different data distributions, etc. Our results show that even with our improved approach, profiling and individually identifying the growing number of EVs appears extremely difficult in practice, at least with the analog charging data utilized throughout the literature. We believe that our findings will further foster the trust of potential users in the EV ecosystem and, consequently, encourage EV adoption.
Katherine S. Zhang (Purdue University), Claire Chen (Pennsylvania State University), Aiping Xiong (Pennsylvania State University)
Artificial intelligence (AI) systems in autonomous driving are vulnerable to a number of attacks, particularly physical-world attacks that tamper with physical objects in the driving environment to cause AI errors. When AI systems fail or are about to fail, human drivers are required to take over vehicle control. To understand such human-AI collaboration, in this work we examine 1) whether human drivers can detect these attacks, 2) how they project the consequences for autonomous driving, and 3) what information they expect for safely taking over vehicle control. We conducted an online survey on Prolific. Participants (N = 100) viewed benign and adversarial images of two physical-world attacks. We also presented videos of simulated driving for both attacks. Our results show that participants did not seem to be aware of the attacks. They overestimated the AI's ability to detect the object more in the dirty-road attack than in the stop-sign attack. Such overestimation was also evident when participants predicted the AI's ability in autonomous driving. We also found that participants expected different information (e.g., warnings and AI explanations) for safely taking over the control of autonomous driving.
Dongyao Chen (Shanghai Jiao Tong University), Mert D. Pesé (Clemson University), Kang G. Shin (University of Michigan, Ann Arbor)
Driving apps, such as navigation, fuel-price, and road-service apps, have been deployed and used widely. The car-related nature of these services may motivate them to infer the type of their users' vehicles. We first apply systematic analytics on real-world apps to show that the seemingly harmless vehicle-type information may have serious privacy implications. Next, we demonstrate that attackers can harvest the features of these mobile apps to infer the car-type information in a stealthy way. Specifically, we explore the use of zero-permission mobile motion sensors to extract spectral features for differentiating the engines and body types of vehicles. Based on our experimental results for 17 different cars, we achieve 82+% and 85+% overall accuracy in identifying three major engine types and four popular body types, respectively.
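The spectral features in question come from transforming a window of motion-sensor samples into the frequency domain, where engine and body vibrations show up as characteristic peaks. A minimal sketch using a naive DFT (the synthetic "vibration" signal below stands in for real accelerometer data):

```python
import cmath
import math

def dft_magnitudes(signal):
    """Magnitude spectrum of a sensor window (naive DFT, first N/2 bins)."""
    N = len(signal)
    return [abs(sum(x * cmath.exp(-2j * cmath.pi * k * t / N)
                    for t, x in enumerate(signal))) / N
            for k in range(N // 2)]

# A pure vibration at 2 cycles per 8-sample window peaks at bin 2.
signal = [math.cos(2 * math.pi * 2 * t / 8) for t in range(8)]
mags = dft_magnitudes(signal)
```

A classifier then uses the locations and relative heights of such peaks as features; because motion sensors require no Android/iOS permission, this is exactly what makes the attack stealthy.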
Meisam Mohammady (Iowa State University), Reza Arablouei (Data61, CSIRO)
We estimate vehicular traffic states from multi-modal data collected by single-loop detectors while preserving the privacy of the individual vehicles contributing to the data. To this end, we propose a novel hybrid differential privacy (DP) approach that utilizes minimal randomization to preserve privacy by taking advantage of the relevant traffic state dynamics and the concept of DP sensitivity. Through theoretical analysis and experiments with real-world data, we show that the proposed approach significantly outperforms the related baseline non-private and private approaches in terms of accuracy and privacy preservation.
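The DP-sensitivity idea referenced above is standard: noise is calibrated to how much one individual can change the released statistic. A minimal sketch of the classical Laplace mechanism for a loop-detector vehicle count (the count and epsilon values are illustrative; the paper's hybrid approach adds less noise by exploiting traffic dynamics):

```python
import math
import random

def laplace_mechanism(true_count, sensitivity, epsilon):
    """Release a statistic with Laplace noise of scale sensitivity/epsilon."""
    scale = sensitivity / epsilon
    # Sample Laplace(0, scale) by inverse-CDF from a uniform draw.
    u = random.random() - 0.5
    noise = -scale * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))
    return true_count + noise

# One vehicle entering or leaving changes a count by at most 1,
# so the sensitivity of a single-loop count is 1.
noisy = laplace_mechanism(128, 1, 0.5)
```

Smaller epsilon gives stronger privacy but larger noise; the paper's contribution is getting acceptable accuracy at a given epsilon by using minimal randomization.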
Noah T. Curran (University of Michigan), Kang G. Shin (University of Michigan), William Hass (Lear Corporation), Lars Wolleschensky (Lear Corporation), Rekha Singoria (Lear Corporation), Isaac Snellgrove (Lear Corporation), Ran Tao (Lear Corporation)
On urban roadways, "dooring" remains a serious safety problem for pedestrians, cyclists, and other vulnerable road users (VRUs). Existing solutions that address this concern remain inadequate, as they either place unreasonable expectations on the pedestrians or rely on prohibitively expensive additions to the vehicle's sensing capabilities. Consequently, typical consumer vehicles are not yet equipped with such a technology, and practical dooring prevention remains a safety concern.
To address this problem, we propose a driver safety system for dooring prevention, called S-Door, that uses a resource already available in every modern vehicle: Bluetooth Low Energy (BLE). Since a modern vehicle is equipped with multiple distributed BLE transceivers, we leverage each transceiver to observe the BLE advertising data (AD) packets that consumers' smart devices passively transmit. From these AD packets, we extract information that we can use to localize the VRU device without pairing with it. With this information, we propose two methods for localization, based on BLE versions ≤5.0 and ≥5.1, respectively. Our solutions are capable of alerting the driver of all instances of an oncoming VRU. Because S-Door uses existing vehicle BLE hardware, we may extend this application to modern vehicles through a firmware update; no physical modification is necessary.
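For BLE ≤5.0, ranging typically relies on received signal strength. A minimal sketch of the standard log-distance path-loss model that such RSSI-based localization builds on (the calibrated 1 m reference power and path-loss exponent below are assumed values, and BLE ≥5.1 would instead use angle-of-arrival from the constant tone extension):

```python
def rssi_to_distance(rssi_dbm, tx_power_dbm=-59, path_loss_exp=2.0):
    """Estimate range in meters from an observed advertising-packet RSSI.
    tx_power_dbm is the calibrated RSSI at 1 m (an assumed value here)."""
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10 * path_loss_exp))

# At the calibrated reference power we are ~1 m away;
# 20 dB weaker corresponds to ~10 m in free space.
d_near = rssi_to_distance(-59)   # 1.0
d_far = rssi_to_distance(-79)    # 10.0
```

With several transceivers around the vehicle body, a set of such range estimates can be combined (e.g., by trilateration) to place the VRU relative to each door.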
Jaewan Seo, Jiwon Kwak, Seungjoo Kim (Korea University)
The number of cyberattacks on automotive systems via wireless networks is increasing. To respond to cyberattacks on automotive systems, the United Nations Economic Commission for Europe (UNECE) has enacted the UN Regulation series. Among them, UN R156 specifies the requirements necessary for the design and implementation of a software update management system (SUMS). However, the requirements of UN R156 are too abstract to develop the overall systems of a SUMS. Therefore, we conducted threat modeling to obtain more specific security requirements than those specified in UN R156. Based on the threat modeling, we propose a secure SUMS architecture that meets these specific security requirements. Finally, we formally verify, using Event-B, that our SUMS architecture logically meets the security requirements.
Chong Xiang (Princeton University), Chawin Sitawarin (University of California, Berkeley), Tong Wu (Princeton University), Prateek Mittal (Princeton University)
The physical-world adversarial patch attack poses a security threat to AI perception models in autonomous vehicles. To mitigate this threat, researchers have designed defenses with certifiable robustness. In this paper, we survey existing certifiably robust defenses and highlight core robustness techniques that are applicable to a variety of perception tasks, including classification, detection, and segmentation. We emphasize the unsolved problems in this space to guide future research, and call for attention and efforts from both academia and industry to robustify perception models in autonomous vehicles.
Mahdi Akil (Karlstad University), Leonardo Martucci (Karlstad University), Jaap-Henk Hoepman (Radboud University)
In vehicular ad hoc networks (VANETs), vehicles exchange messages to improve traffic and passenger safety. In VANETs, (passive) adversaries can track vehicles (and their drivers) by analyzing the data exchanged in the network. Privacy-enhancing technologies can prevent vehicle tracking, but the solutions proposed so far either require an intermittent connection to a fixed infrastructure or allow vehicles to generate concurrent pseudonyms, which could lead to identity-based (Sybil) attacks. In this paper, we propose an anonymous authentication scheme that does not require a connection to a fixed infrastructure during operation and is not vulnerable to Sybil attacks. Our scheme is built on attribute-based credentials and short-lived pseudonyms. In it, vehicles interact with a central authority only once, to register themselves, and then generate their own pseudonyms without interacting with other devices or relying on a central authority or trusted third party. The pseudonyms are periodically refreshed, following system-wide epochs.
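The epoch-refresh idea can be sketched with a keyed derivation: a secret obtained once at registration deterministically yields a fresh identifier per epoch, with no authority online. This is only an illustration of the refresh mechanism; the paper's actual construction uses attribute-based credentials, not a bare HMAC:

```python
import hmac
import hashlib

def pseudonym(credential_secret: bytes, epoch: int) -> str:
    """Derive a per-epoch pseudonym locally from a registration secret."""
    mac = hmac.new(credential_secret, epoch.to_bytes(8, "big"),
                   hashlib.sha256)
    return mac.hexdigest()[:16]

secret = b"issued-once-at-registration"  # hypothetical secret
assert pseudonym(secret, 41) != pseudonym(secret, 42)  # refreshed each epoch
assert pseudonym(secret, 42) == pseudonym(secret, 42)  # stable within an epoch
```

Because the derivation yields exactly one pseudonym per epoch per credential, a vehicle cannot present multiple concurrent identities, which is the property that blocks Sybil attacks.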
Autonomous vehicles must operate in a complex environment with various social norms and expectations. While most of the work on securing autonomous vehicles has focused on safety, we argue that we also need to monitor for deviations from various societal “common sense” rules to identify attacks against autonomous systems. In this paper, we provide a first approach to encoding and understanding these common-sense driving behaviors by semi-automatically extracting rules from driving manuals. We encode our driving rules in a formal specification and make our rules available online for other researchers.
Jun Ying (Purdue University), Yiheng Feng (Purdue University), Qi Alfred Chen (University of California, Irvine), Z. Morley Mao (University of Michigan)
Intersection movement assist (IMA) is a connected vehicle (CV) application to improve vehicle safety. GPS spoofing is a major threat to the IMA application, since inaccurate localization results may generate fake warnings that increase rear-end crashes, or cancel real warnings, which may lead to angle or sideswipe crashes. In this work, we first develop a GPS spoofing attack model to trigger the IMA warning of entry vehicles in a roundabout driving scenario. The attack model can generate realistic trajectories while achieving the attack goal. To defend against such attacks, we further design a one-class classifier to distinguish normal vehicle trajectories from trajectories under attack. The proposed model is validated with a real-world dataset collected in Ann Arbor, Michigan. Results show that although the attack model triggers the IMA warning in a short time (i.e., in a few seconds), the detection model can still identify the abnormal trajectories before the attack succeeds, with low false positive and false negative rates.
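The one-class idea, fitting a detector on normal trajectories only and flagging anything outside the learned distribution, can be sketched in a few lines. This toy detector (the feature choice and threshold rule are our assumptions, not the paper's classifier) summarizes each trajectory by its maximum implied speed and maximum heading change, then flags trajectories whose features deviate by more than k standard deviations from the training data:

```python
import math

def features(traj):
    """Summarize a trajectory (list of (x, y) positions at 0.1 s steps,
    at least 3 points) by max implied speed and max heading change."""
    speeds, headings = [], []
    for (x0, y0), (x1, y1) in zip(traj, traj[1:]):
        dx, dy = x1 - x0, y1 - y0
        speeds.append(math.hypot(dx, dy) / 0.1)  # m/s
        headings.append(math.atan2(dy, dx))
    turns = [abs(b - a) for a, b in zip(headings, headings[1:])]
    return max(speeds), (max(turns) if turns else 0.0)

class OneClassDetector:
    """Fit on normal trajectories only; flag anything whose features
    fall outside k standard deviations of the training distribution."""
    def __init__(self, k=3.0):
        self.k = k
    def fit(self, trajs):
        feats = [features(t) for t in trajs]
        cols = list(zip(*feats))
        self.mean = [sum(c) / len(c) for c in cols]
        self.std = [max((sum((v - m) ** 2 for v in c) / len(c)) ** 0.5, 1e-9)
                    for c, m in zip(cols, self.mean)]
        return self
    def is_attack(self, traj):
        return any(abs(v - m) > self.k * s
                   for v, m, s in zip(features(traj), self.mean, self.std))
```

A spoofed trajectory that teleports the vehicle to trigger an IMA warning implies a physically impossible speed and is flagged, while plausible trajectories pass.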
Zachary Depp, Halit Bugra Tulay, C. Emre Koksal (The Ohio State University)
The traditional vehicular roll-jam attack is an effective means to gain access to the target vehicle by jamming and recording key fob inputs from a victim. However, it requires specific knowledge of the attack surface, and delicate tuning of software-defined radio parameters. We have developed an enhanced version of the roll-jam attack that uses a known noise signal for jamming, in contrast to the additive white Gaussian noise that is typically used in the attack. Using a known noise signal allows for less strict tuning of the software-defined radios used in the attack, and allows for digital noise removal of the recorded input to enhance the replay attack.
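The digital noise removal step is the key difference from the classic roll-jam attack: because the jamming waveform is known and reproducible, it can simply be subtracted from the capture. The sketch below (our own illustration; it ignores RF realities such as carrier offset, timing alignment, and channel gain) shows the principle on baseband samples:

```python
import math
import random

def make_known_noise(n, seed=42):
    """Deterministic pseudorandom 'noise' the attacker both transmits
    and stores -- unlike additive white Gaussian noise, it can be
    reproduced exactly for later subtraction."""
    rng = random.Random(seed)
    return [rng.uniform(-1.0, 1.0) for _ in range(n)]

def remove_known_noise(recorded, noise):
    """Digitally subtract the known jamming waveform from the recorded
    key-fob capture, recovering the clean rolling-code signal."""
    return [r - n for r, n in zip(recorded, noise)]
```

With perfect alignment the subtraction recovers the victim's signal exactly, which is why the replayed code remains valid; in practice the quality of the replay depends on how well the stored and transmitted noise copies line up.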
Rik Chatterjee, Subhojeet Mukherjee, Jeremy Daily (Colorado State University)
Modern vehicles are equipped with embedded computers that utilize standard protocols for internal communication. The SAE J1939 protocol suite, running on top of the Controller Area Network (CAN) protocol, is the primary choice of internal communication for embedded computers in medium- and heavy-duty vehicles. This paper presents five different cases in which potential shortcomings of the SAE J1939 standards are exploited to launch attacks on in-vehicle computers that constitute SAE J1939 networks.
In the first two of these scenarios, we validate previously proposed attack hypotheses on more comprehensive testing setups. In the latter three, we present new attack vectors that can be executed on bench test setups and deployed SAE J1939 networks.
For the purpose of demonstration, we use bench-level test systems with real electronic control units connected to a CAN bus. Additional testing was conducted on a 2014 Kenworth T270 Class 6 truck under both stationary and driving conditions. Test results show how protocol attacks can target specific ECUs. These attacks should be considered by engineers and programmers implementing the J1939 protocol stack in their communications subsystem.
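Attacks of this kind hinge on the structure of the J1939 29-bit CAN identifier, which encodes priority, parameter group number (PGN), and source address. As background for readers unfamiliar with the protocol, a minimal decoder (our own illustration, following the standard J1939-21 identifier layout) looks like this:

```python
def parse_j1939_id(can_id: int) -> dict:
    """Split a 29-bit extended CAN identifier into SAE J1939 fields.
    For PDU1 format (PF < 240) the PS byte is a destination address and
    is excluded from the PGN; for PDU2 (PF >= 240) it is part of the PGN."""
    sa = can_id & 0xFF            # source address of the sending ECU
    ps = (can_id >> 8) & 0xFF     # PDU Specific byte
    pf = (can_id >> 16) & 0xFF    # PDU Format byte
    dp = (can_id >> 24) & 0x3     # data page / extended data page bits
    prio = (can_id >> 26) & 0x7   # priority, 0 = highest
    pgn = (dp << 16) | (pf << 8) | (ps if pf >= 240 else 0)
    return {"priority": prio, "pgn": pgn, "ps": ps, "sa": sa}
```

Because the source address is an unauthenticated field in the identifier, any node on the bus can claim another ECU's address, which is one reason protocol-level attacks like those presented here are feasible.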
Takami Sato (University of California, Irvine), Yuki Hayakawa (Keio University), Ryo Suzuki (Keio University), Yohsuke Shiiki (Keio University), Kentaro Yoshioka (Keio University), Qi Alfred Chen (University of California, Irvine)
LiDAR (Light Detection And Ranging) is an indispensable sensor for precise long- and wide-range 3D sensing, and has directly enabled the recent rapid deployment of autonomous driving (AD). At the same time, such a safety-critical application strongly motivates security research. A recent line of research demonstrates that one can manipulate the LiDAR point cloud and fool object detection by firing malicious lasers at the LiDAR. However, these efforts evaluate only one specific LiDAR (the VLP-16) and do not consider the state-of-the-art defense mechanisms in recent, so-called next-generation LiDARs. In this WIP work, we report our recent progress in the security analysis of next-generation LiDARs. We identify a new type of LiDAR spoofing attack applicable to a much more general and recent set of LiDARs. We find that our attack can remove more than 72% of the points in a 10×10 m² area and can remove real vehicles from the perceived scene in the physical world. We also discuss our future plans.
Evaluations of Cyberattacks on Cooperative Control of Connected and Autonomous Vehicles at Bottleneck Points
H M Sabbir Ahmad (Boston University), Ehsan Sabouni (Boston University), Wei Xiao (Massachusetts Institute of Technology), Christos G. Cassandras (Boston University), Wenchao Li (Boston University)
In this paper, we analyze the effect of cyberattacks on cooperative control of connected and autonomous vehicles (CAVs) at traffic bottleneck points. We focus on three types of such bottleneck points: merging roadways, intersections, and roundabouts. The coordination amongst CAVs in the network is achieved in a decentralized manner, whereby each CAV formulates its own optimal control problem and solves it onboard in real time. A roadside unit is introduced to act as the coordinator that communicates and exchanges relevant data with the CAVs through wireless V2X communication. We show that this CAV setup is vulnerable to various cyberattacks, such as Sybil attacks, jamming attacks, and false data injection attacks. Results from our simulation experiments call attention to the extent to which such attacks may jeopardize the coordination performance and the safety of the CAVs.
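To see why false data injection is dangerous in this setting, consider a toy safety check a CAV might apply before committing to a merging point. This sketch is our own illustration (the constant-time-headway rule and its parameters are assumptions, not the paper's optimal control formulation), but it shows how a single falsified gap report can flip a safety decision:

```python
def safe_to_proceed(gap_m: float, own_speed: float,
                    reaction_time: float = 0.8, min_gap: float = 2.0) -> bool:
    """Constant-time-headway check: the gap to the conflicting vehicle
    must cover a fixed minimum plus a speed-dependent headway margin."""
    return gap_m >= min_gap + reaction_time * own_speed

# True state: a 10 m gap at 15 m/s is unsafe (needs 2 + 0.8*15 = 14 m).
# An injected message inflating the gap to 20 m makes the check pass,
# so the CAV enters the bottleneck on falsified data.
```

The same pattern generalizes: jamming removes the data the check needs, and Sybil attacks flood the coordinator with phantom conflicting vehicles, degrading throughput even when no collision occurs.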
Chen Ma (Xi'an Jiaotong University), Ningfei Wang (University of California, Irvine), Qi Alfred Chen (University of California, Irvine), Chao Shen (Xi'an Jiaotong University)
Recently, adversarial examples against object detection have been widely studied. However, it is difficult for these attacks to have an impact on visual perception in autonomous driving, because the complete visual pipeline of real-world autonomous driving systems includes not only object detection but also object tracking. In this paper, we present a novel tracker hijacking attack against the multi-target tracking algorithm employed by real-world autonomous driving systems, which perturbs the bounding boxes produced by object detection to spoof the multi-object tracking process. Our approach exploits the detection box generation process of the anchor-based object detection algorithm and designs new optimization methods to generate adversarial patches that can successfully perform tracker hijacking attacks, causing security risks. The evaluation results show that our approach achieves an 85% attack success rate on two detection models employed by real-world autonomous driving systems. We discuss our planned next steps for this work.
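The hijacking works because trackers associate detections to existing tracks by overlap: an adversarially shifted box that still overlaps the victim's track enough will be accepted and will drag the track away. A minimal sketch of greedy IoU association (a generic simplification of multi-object tracking, not the specific algorithm attacked in the paper) makes this concrete:

```python
def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter)

def associate(track_box, detections, thresh=0.3):
    """Greedy association, as in simple multi-object trackers: pick the
    detection with highest IoU against the track, if above a threshold."""
    best = max(detections, key=lambda d: iou(track_box, d), default=None)
    return best if best is not None and iou(track_box, best) >= thresh else None
```

A patch that shifts the detected box by less than the association threshold allows is thus enough: the track follows the adversarial box frame after frame, while a box shifted too far would simply spawn a new track.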