NDSS

Automotive and Autonomous Vehicle Security (AutoSec) Workshop 2021

All times are in PST (UTC-8).

Thursday February 25
  • 7:00 am - 7:15 am
    Welcome to AutoSec 2021 and Paper Awards
  • 7:15 am - 8:20 am
    Session 1: Miscellaneous
    Chair: Ziming Zhao (University at Buffalo)
    • Hyunjae Kang, Byung Il Kwak, Young Hun Lee, Haneol Lee, Hwejae Lee, and Huy Kang Kim (Korea University)

      Abstract

      Cybersecurity competitions can promote the importance of security and help discover talented researchers. We hosted the Car Hacking: Attack & Defense Challenge from September 14 to November 27, 2020, and many security companies and researchers participated. To the best of our knowledge, it is the first competition to contest both attack and detection techniques on an in-vehicle network, specifically the Controller Area Network (CAN). The participants developed various injection attacks and high-performance detection algorithms based on a real vehicle environment. Rule-based and ensemble tree-based models dominated the final round, and time-interval and data-byte patterns served as the major features for detecting attacks.

    • Seonghoon Jeong, Eunji Park, Kang Uk Seo, Jeong Do Yoo, and Huy Kang Kim (Korea University)

      Abstract

      The MAVLink protocol is the de facto standard for communication between unmanned vehicles and ground control stations (GCS). Given the nature of the system, unmanned vehicles use MAVLink to be monitored and controlled by a GCS, and such communication increasingly takes place over the Internet. In the past few years, unmanned vehicle security has been one of the key research topics in the security field; however, existing research has mainly focused on sensor- and GPS-based attack detection methods. To this end, we propose MUVIDS, a network-level intrusion detection system to protect MAVLink-enabled unmanned vehicles managed by a GCS over the Internet. MUVIDS includes two long short-term memory (LSTM) models that leverage the sequential MAVLink stream from a victim vehicle. The two models solve a binary classification problem (when labels are available) and a next-MAVLink-message prediction problem (when no labels are available), respectively. Experiments were performed on software-in-the-loop and hardware-in-the-loop unmanned aerial vehicle (UAV) simulators, and the results confirm that MUVIDS effectively detects false MAVLink injection attacks.

    • Zhongyuan Hau, Kenneth Co, Soteris Demetriou, and Emil Lupu (Imperial College London)

      Abstract

      LiDARs play a critical role in Autonomous Vehicles’ (AVs) perception and their safe operations. Recent works have demonstrated that it is possible to spoof LiDAR return signals to elicit fake objects. In this work we demonstrate how the same physical capabilities can be used to mount a new, even more dangerous class of attacks, namely Object Removal Attacks (ORAs). ORAs aim to force 3D object detectors to fail. We leverage the default setting of LiDARs that record a single return signal per direction to perturb point clouds in the region of interest (RoI) of 3D objects. By injecting illegitimate points behind the target object, we effectively shift points away from the target objects’ RoIs. Our initial results using a simple random point selection strategy show that the attack is effective in degrading the performance of commonly used 3D object detection models.
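
The single-return-per-direction dynamic the abstract describes can be sketched in a few lines. The following is an illustrative reconstruction, not the authors' code; the function name, the RoI representation, and the random-selection budget are all assumptions made for the example.

```python
import random

def object_removal_attack(points, roi, shift_distance, budget):
    """Illustrative sketch of an Object Removal Attack (ORA).

    points: list of (x, y, z) LiDAR returns; roi: (xmin, xmax, ymin, ymax)
    bounding the target object on the ground plane. Because typical LiDARs
    record a single return per direction, injecting a return farther along
    the same ray replaces the legitimate point, so the effect is modeled
    here by relocating randomly chosen in-RoI points behind the RoI
    (simplified to a shift along x).
    """
    xmin, xmax, ymin, ymax = roi
    in_roi = [i for i, (x, y, z) in enumerate(points)
              if xmin <= x <= xmax and ymin <= y <= ymax]
    attacked = list(points)
    # Simple random point-selection strategy, as in the paper's initial results.
    for i in random.sample(in_roi, min(budget, len(in_roi))):
        x, y, z = attacked[i]
        attacked[i] = (x + shift_distance, y, z)  # pushed behind the object
    return attacked
```

With enough budget, the target's RoI is emptied of returns, which is what degrades the downstream 3D object detector.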

    • Li Yue, Zheming Li, Tingting Yin, and Chao Zhang (Tsinghua University)

      Abstract

      Modern vehicles have many electronic control units (ECUs) connected to the Controller Area Network (CAN) bus; these have few security features by design and are vulnerable to cyber attacks. Researchers have proposed solutions like intrusion detection systems (IDS) to mitigate such threats. We present a novel attack, CANCloak, which can deceive two ECUs with one CAN data frame and can therefore bypass IDS detection or cause vehicle malfunction. In this attack, assuming the adversary controls a malicious transmitter, one crafted CAN data frame can be delivered to a target receiver while other ECUs neither receive that frame nor raise any error. We set up a physical test environment and evaluated the effectiveness of the attack. Evaluation results show that the success rate of CANCloak reaches up to 99.7%; performance depends on the attack payload and the sample-point settings of victim receivers but is independent of the bus bit rate.

  • 8:20 am - 8:35 am
    Break + Demo Session 1
    Chair: Qi Alfred Chen (UC Irvine)
    • Yunzhe Tian, Yike Li, Yingxiao Xiang, Wenjia Niu, Endong Tong, and Jiqiang Liu (Beijing Jiaotong University)

      Abstract

      Robust reinforcement learning is a challenging problem because the differences between the real and training environments are unknown in advance. Existing efforts approach the problem by applying random environmental perturbations during learning. However, one cannot guarantee that a perturbation is beneficial; bad ones may cause reinforcement learning to fail. Therefore, in this paper we propose using a GAN to dynamically generate progressive perturbations at each epoch, realizing curricular policy learning. A demo we implemented in the CarRacing game validates the effectiveness of this approach.

    • Yuzhe Ma, Jon Sharp, Ruizhe Wang, Earlence Fernandes, and Jerry Zhu (University of Wisconsin–Madison)

      Abstract

      Kalman Filter (KF) is widely used in various domains to perform sequential learning or variable estimation. In the context of autonomous vehicles, KF constitutes the core component of many Advanced Driver Assistance Systems (ADAS), such as Forward Collision Warning (FCW). It tracks the states (distance, velocity etc.) of relevant traffic objects based on sensor measurements. The tracking output of KF is often fed into downstream logic to produce alerts, which will then be used by human drivers to make driving decisions in near-collision scenarios. In this work, we demonstrate planning-based attacks on Forward Collision Warning — a machine-human hybrid system that uses KF. Based on our work published at the AAAI2021 conference, we use an MPC-based algorithm and show how an attacker can sequentially perturb vision measurements to change the FCW alert signals at desired points in time. We simulate our attack on CARLA using standard test protocols from the National Highway Traffic Safety Administration.
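
The KF-to-alert pipeline the abstract attacks can be illustrated with a minimal sketch. This is not the authors' AAAI 2021 implementation: it tracks distance only with a scalar Kalman filter (the real FCW tracker also estimates velocity), and the noise parameters and alert threshold are invented for the example.

```python
def kalman_step(x, p, z, q=0.01, r=1.0):
    """One predict/update step of a scalar Kalman filter tracking distance.

    x: current distance estimate; p: estimate variance; z: noisy distance
    measurement; q, r: illustrative process/measurement noise. The estimate
    is a weighted blend of prediction and measurement, which is why
    sequentially perturbing z lets an attacker steer the track over time.
    """
    p = p + q                 # predict: uncertainty grows by process noise
    k = p / (p + r)           # Kalman gain: how much to trust the measurement
    x = x + k * (z - x)       # update estimate toward the measurement
    p = (1.0 - k) * p         # updated uncertainty
    return x, p

def fcw_alert(distance, threshold=5.0):
    """Downstream logic: alert when the tracked distance falls below a threshold."""
    return distance < threshold
```

Because the filter smooths its input, a single perturbed frame barely moves the estimate; the MPC-based attack in the paper plans a sequence of perturbations so the alert flips at the desired time.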

  • 8:35 am - 9:30 am
    Session 2: In-Vehicle Network Security
    Chair: Qi Alfred Chen (UC Irvine)
    • Gedare Bloom (University of Colorado Colorado Springs)

      Abstract

      The controller area network (CAN) is a high-value asset to defend and attack in automobiles. The bus-off attack exploits CAN’s fault confinement to force a victim electronic control unit (ECU) into the bus-off state, which prevents it from using the bus. Although pernicious, the bus-off attack has two distinct phases that are observable on the bus and allow the attack to be detected and prevented. In this paper we present WeepingCAN, a refinement of the bus-off attack that is stealthy and can escape detection. We evaluate WeepingCAN experimentally using realistic CAN benchmarks and find it succeeds in over 75% of attempts without exhibiting the detectable features of the original attack. We demonstrate WeepingCAN on a real vehicle.
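
The fault-confinement mechanism the bus-off attack exploits is easy to model. The sketch below simulates the standard CAN error-counter rules (transmit error counter +8 per transmit error, -1 per successful transmit; error-passive at 128, bus-off at 256); it illustrates the mechanism only and is not the WeepingCAN attack code.

```python
def transmit(tec, error):
    """Update a CAN node's transmit error counter (TEC) per CAN fault
    confinement: +8 on a transmit error, -1 on success (floored at 0)."""
    return tec + 8 if error else max(0, tec - 1)

def drive_to_bus_off(schedule):
    """Feed a boolean error schedule to a victim node and report its state.

    Returns the final TEC and the state reached. A node is error-passive
    once TEC >= 128 and bus-off once TEC >= 256, at which point it can no
    longer use the bus.
    """
    tec = 0
    for error in schedule:
        tec = transmit(tec, error)
        if tec >= 256:
            return tec, "bus-off"
    return tec, ("error-passive" if tec >= 128 else "error-active")
```

An attacker that corrupts every victim transmission reaches bus-off in a burst of 32 consecutive errors, a conspicuous pattern; interleaving successful transmissions still drives the counter up (net +7 per error/success pair) while avoiding that telltale burst, which is the kind of pacing a stealthier variant can exploit.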

    • Jeremy Daily, David Nnaji, and Ben Ettlinger (Colorado State University)

      Abstract

      Controller Area Network (CAN) implementations inherently trust all valid messages on the network. While this feature makes for easy replacement and repair of electronic control units (ECUs), the same trust poses cybersecurity challenges, such as making it easy to spoof messages or alter them with a middleperson attack. On an SAE J1939-based network, the meaning of network messages is often published, which reduces the work needed to reverse engineer the protocol. Furthermore, J1939 is often used on high-value and high-risk cyber-physical systems, such as trucks, buses, generator systems, and construction, agriculture, forestry, marine, and military systems. Therefore, improving the cybersecurity posture of SAE J1939 networks is crucial for protecting critical infrastructure.

      The approach outlined in this paper for an intrusion detection system (IDS) uses so-called CAN Conditioners, at or in each of the vehicle ECUs, that communicate with a Secure Gateway near the vehicle’s diagnostic port. Each CAN Conditioner and the Secure Gateway includes an allowlist and blocklist procedure to prevent a variety of network attacks. In addition, a cipher-based message authentication code (CMAC) is calculated by each node and transmitted across the network using the J1939 Data Security Message parameter group number (PGN). This CMAC message acts as a heartbeat that lets the Secure Gateway verify healthy node behavior and unaltered messaging.

      Reference prototype hardware and software are described, along with results from a test implementation on a Class 6 truck with a 6.7L diesel engine and an automated transmission. The provisioning process sets up hardware security modules to exchange secrets over the CAN bus using the elliptic-curve Diffie-Hellman (ECDH) protocol. Once secrets are exchanged, ephemeral session keys are shared with the Secure Gateway, which keeps track of the CMACs from each CAN Conditioner. If a CMAC fails to match, the Secure Gateway informs the network using J1939 Diagnostic Message #1 and a message using the J1939-defined Impostor PG Alert parameter group. Results show the IDS can detect alteration of a message or an impersonated message.
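
The heartbeat check can be sketched as follows. The paper uses AES-CMAC carried in the J1939 Data Security Message PGN; the sketch below substitutes HMAC-SHA256 from the Python standard library to stay self-contained, and the node ID, counter field, and 8-byte tag truncation are assumptions made for the example, not details from the paper.

```python
import hashlib
import hmac

def heartbeat_tag(session_key, node_id, counter, recent_frames):
    """Compute a heartbeat MAC over a node's recent traffic.

    Stand-in for the paper's AES-CMAC: HMAC-SHA256 truncated to 8 bytes,
    keyed with the ephemeral session key shared during provisioning.
    The counter prevents replay of old heartbeats.
    """
    msg = node_id + counter.to_bytes(4, "big") + b"".join(recent_frames)
    return hmac.new(session_key, msg, hashlib.sha256).digest()[:8]

def gateway_check(session_key, node_id, counter, recent_frames, received_tag):
    """Secure Gateway side: recompute the tag and compare in constant time.
    On a mismatch, the gateway would raise a J1939 DM1 / Impostor PG Alert."""
    expected = heartbeat_tag(session_key, node_id, counter, recent_frames)
    return hmac.compare_digest(expected, received_tag)
```

A tampered frame or a stale counter changes the recomputed tag, so the gateway detects both altered and impersonated messages, which is the behavior the test results describe.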

    • Deborah Blevins (University of Kentucky), Pablo Moriano, Robert Bridges, Miki Verma, Michael Iannacone, and Samuel Hollifield (Oak Ridge National Laboratory)

      Abstract

      Modern vehicles are complex cyber-physical systems made of hundreds of electronic control units (ECUs) that communicate over controller area networks (CANs). This inherited complexity has expanded the CAN attack surface, which is vulnerable to message injection attacks. These injections change the overall timing characteristics of messages on the bus; thus, to detect these malicious messages, time-based intrusion detection systems (IDSs) have been proposed. However, time-based IDSs are usually trained and tested on low-fidelity datasets with unrealistic, labeled attacks, which makes it difficult to evaluate, compare, and validate IDSs. Here we detail and benchmark four time-based IDSs against the newly published ROAD dataset, the first open CAN IDS dataset with real (non-simulated) stealthy attacks with physically verified effects. We find that methods that perform hypothesis testing by explicitly estimating message timing distributions have lower performance than methods that seek anomalies in a distribution-related statistic. In particular, these “distribution-agnostic” methods outperform “distribution-based” methods by at least 55% in area under the precision-recall curve (AUC-PR). Our results expand the body of knowledge on CAN time-based IDSs by providing details of these methods and reporting their results when tested on datasets with real advanced attacks. Finally, we develop an after-market plug-in detector using lightweight hardware, which can be used to deploy the best-performing IDS method on nearly any vehicle.
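
The timing signal these IDSs rely on can be illustrated with a minimal sketch: learn a per-arbitration-ID inter-arrival baseline from attack-free traffic, then flag windows whose mean interval deviates sharply. This is a toy example of a time-based check, not one of the four benchmarked methods; the windowing and threshold are invented for the example.

```python
from statistics import mean, stdev

def train_baseline(intervals):
    """Learn a timing baseline (mean, stdev) for one arbitration ID
    from attack-free inter-arrival times."""
    return mean(intervals), stdev(intervals)

def detect(window, baseline, threshold=4.0):
    """Flag a window of inter-arrival times as anomalous.

    A message-injection attack inserts extra frames, shrinking the mean
    inter-arrival time for the targeted ID; windows whose mean deviates
    from the baseline by more than `threshold` standard errors are flagged.
    """
    mu, sigma = baseline
    se = sigma / (len(window) ** 0.5)   # standard error of the window mean
    z = abs(mean(window) - mu) / se
    return z > threshold
```

Stealthy attacks that only slightly perturb timing shrink this deviation, which is why low-fidelity datasets with blatant injections overstate how well simple detectors perform.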

  • 9:30 am - 9:45 am
    Break + Demo Session 2
    Chair: Ziming Zhao (University at Buffalo)
    • Ben Nassi, Raz Ben-Netanel (Ben-Gurion University of the Negev), Adi Shamir (Weizmann Institute of Science), and Yuval Elovici (Ben-Gurion University of the Negev)

      Abstract

      In this demo, we demonstrate that cryptanalysis can be used to determine whether a passing drone is used for spying, by analyzing the drone’s encrypted video channel. We also show that a spying drone can be detected when the victim is located in a house or traveling in a car, with the use of a flickering light.

    • Ben Nassi (Ben-Gurion University of the Negev), Yisroel Mirsky (Ben-Gurion University of the Negev, Georgia Tech), Dudi Nassi, Raz Ben Netanel (Ben-Gurion University of the Negev), Oleg Drokin (Independent Researcher), and Yuval Elovici (Ben-Gurion University of the Negev)

      Abstract

      In this demo, we demonstrate how attackers can remotely apply split-second phantom attacks by embedding phantom road signs into an advertisement presented on an Internet-connected digital billboard, causing Tesla’s Autopilot to suddenly stop the car in the middle of the road.

  • 9:45 am - 10:35 am
    Keynote
    Chair: Qi Alfred Chen (UC Irvine)
    • Jonathan Petit (Director Of Engineering at Qualcomm Technologies)
      Dr. Jonathan Petit is Director of Engineering at Qualcomm Technologies, Inc., where he leads research on the security of connected and automated vehicles (CAV). His team designs security solutions, develops tools for automotive penetration testing, and builds prototypes. His recent work on misbehavior protection for V2X has been integrated into the US DOT Connected Vehicle Pilot Deployment and into OEM solutions. He was the first to demonstrate attacks on LiDAR and camera systems for automated vehicles. His research also covers CAV privacy, where he demonstrated real-world eavesdropping and its effect on location privacy. Dr. Petit holds a PhD from University Paul Sabatier, Toulouse, France.

      Abstract

      Vehicle-to-everything communication is a key component to automated driving. Therefore, V2X data must be secured in order to ensure safe and efficient operations. In this talk I will present the status of V2X security and highlight open challenges (e.g. misbehavior detection, new connected vehicle applications).

  • 10:35 am - 10:50 am
    Break + Demo Session 3
    Chair: Ziming Zhao (University at Buffalo)
    • Jeremy Daily, David Nnaji, and Ben Ettlinger (Colorado State University)

      Abstract

      Diagnostics and maintenance systems create frequent, legitimate, and intermittent connections to a vehicle’s communication network. These connections are typically made with a vehicle diagnostics adapter (VDA), which translates vehicle network communications for a Windows-based service computer running diagnostics software. With heavy vehicles, the diagnostic systems are written and maintained by the suppliers of the electronic control units, which means multiple software packages may be needed to maintain a single heavy vehicle. However, all of these software systems use an interface defined by the American Trucking Associations (ATA) through their Technology and Maintenance Council (TMC) in Recommended Practice (RP) 1210, the Windows API for vehicle diagnostics. Therefore, most diagnostics and maintenance communications on heavy vehicles use a third-party VDA with little to no cybersecurity controls. The firmware and drivers for the VDA can be entry points for cyber attacks. In this demonstration, a vehicle diagnostics session is attacked through the VDA firmware, the VDA PC driver, and a middle-person attack. A proposed secure diagnostics gateway is demonstrated that secures the diagnostics communications between the heavy vehicle network and the diagnostics application, thus defending against attacks on vulnerable VDA components. Furthermore, maintenance operations are often trusted, and an attacker can gain physical access to the vehicle alongside an unwitting technician. Since these diagnostic systems are connected to the Internet and run Windows, the traditional security issues associated with Windows PCs are now part of the heavy vehicle attack surface.

    • Pritam Dash, Mehdi Karimibiuki, and Karthik Pattabiraman (University of British Columbia)

      Abstract
  • 10:50 am - 11:35 am
    Session 3: Autonomous Driving Security I: Physical-World Attacks
    Chair: Xiali (Sharon) Hei (University of Louisiana at Lafayette)
    • Takami Sato, Junjie Shen, Ningfei Wang (UC Irvine), Yunhan Jia (ByteDance), Xue Lin (Northeastern University), and Qi Alfred Chen (UC Irvine)

      Abstract

      Automated Lane Centering (ALC) systems are convenient and widely deployed today, but also highly security and safety critical. Recently, the Dirty Road Patch (DRP) attack was proposed as a state-of-the-art adversarial attack against ALC systems. In this work, we report our recent progress in improving the DRP attack in terms of deployability, stealthiness, and effectiveness on a real vehicle. We also discuss future directions.

    • Abstract

      The susceptibility of neural networks to adversarial attack prompts serious safety concerns for lane detection, a domain where such models have been widely applied. Recent work on adversarial road patches has successfully induced perception of lane lines of arbitrary form, presenting an avenue for rogue control of vehicle behavior. In this paper, we propose a modular lane verification system that can catch such threats before the autonomous driving system is misled, while remaining agnostic to the particular lane detection model. Our experiments show that implementing the system with a simple convolutional neural network (CNN) can defend against a wide gamut of attacks on lane detection models. With a 10% impact on inference time, we can detect 96% of bounded non-adaptive attacks, 90% of bounded adaptive attacks, and 98% of patch attacks while preserving accurate identification of at least 95% of true lanes, indicating that our proposed verification system is effective at mitigating lane detection security risks with minimal overhead.

    • Hengyi Liang, Ruochen Jiao (Northwestern University), Takami Sato, Junjie Shen, Qi Alfred Chen (UC Irvine), and Qi Zhu (Northwestern University)

      Abstract

      Machine learning techniques, particularly those based on deep neural networks (DNNs), are widely adopted in the development of advanced driver-assistance systems (ADAS) and autonomous vehicles. While providing significant improvement over traditional methods in average performance, the usage of DNNs also presents great challenges to system safety, especially given the uncertainty of the surrounding environment, the disturbance to system operations, and the current lack of methodologies for predicting DNN behavior. In particular, adversarial attacks on the sensing input may cause errors in the system’s perception of the environment and lead to system failure. However, existing works mainly focus on analyzing the impact of such attacks on the sensing and perception results and designing mitigation strategies accordingly. We argue that as system safety is ultimately determined by the actions the system takes, it is essential to take an end-to-end approach and address adversarial attacks with consideration of the entire ADAS or autonomous driving pipeline, from sensing and perception to planning, navigation, and control. In this paper, we present our initial findings in quantitatively analyzing the impact of one type of adversarial attack (which leverages a road patch) on system planning and control, and discuss some possible directions for systematically addressing such attacks with an end-to-end view.

  • 11:35 am - 11:50 am
    Break + Demo Session 4
    Chair: Qi Alfred Chen (UC Irvine)
    • Yulong Cao, Jiaxiang Ma, Kevin Fu (University of Michigan), Sara Rampazzi (University of Florida), and Z. Morley Mao (University of Michigan)

      Abstract

      Recent studies have demonstrated that LiDAR sensors are vulnerable to spoofing attacks, in which adversaries spoof fake points to fool the car’s perception system into seeing nonexistent obstacles. However, these attacks have generally been conducted in static or simulated scenarios. In this demo, we therefore perform the first LiDAR spoofing attack on moving targets. We implemented a minimal tracking system integrated with the spoofer device to perform laser-based attacks on LiDAR sensors. The demo shows how it is possible to inject up to 100 fake point cloud points under three different scenarios.

    • Christopher DiPalma, Ningfei Wang, Takami Sato, and Qi Alfred Chen (UC Irvine)

      Abstract

      Robust perception is crucial for autonomous vehicle security. In this work, we design a practical adversarial patch attack against camera-based obstacle detection. We identify that the back of a box truck is an effective attack vector. We also improve attack robustness by considering a variety of input frames associated with the attack scenario. This demo includes videos that show our attack can cause end-to-end consequences on a representative autonomous driving system in a simulator.

  • 11:50 am - 12:15 pm
    Session 4: Autonomous Driving Security II: Sensor Attacks
    Chair: Hongxin Hu (University at Buffalo)
    • Ben Nassi, Dudi Nassi, Raz Ben Netanel and Yuval Elovici (Ben-Gurion University of the Negev)

      Abstract

      In this paper, we evaluate the robustness of the Mobileye 630 PRO, the most popular off-the-shelf ADAS on the market today, to camera spoofing attacks applied using a projector. We show that the Mobileye 630 issues false notifications about road signs projected in proximity to the car in which the system is installed. We assess how changes to the road signs (e.g., changes in color, shape, projection speed, diameter, and ambient light) affect the outcome of an attack. We find that while the Mobileye 630 PRO rejects fake projected road signs that consist of non-original shapes and objects, it accepts fake projected road signs that consist of non-original colors. We demonstrate how attackers can leverage these findings to mount a remote attack in a realistic scenario, using a drone that carries a portable projector to project a spoofed traffic sign on a building near a passing car equipped with the Mobileye 630. Our experiments show that it is possible to fool the Mobileye 630 PRO into issuing false notifications about a traffic sign projected from a drone.

    • Abstract

      The perception module is key to the security of Autonomous Driving systems. It perceives the environment through sensors to help make safe and correct driving decisions on the road. The localization module is usually considered independent of the perception module. However, we discover that the correctness of the perception output depends heavily on localization, due to the widely used Region-of-Interest (ROI) design adopted in perception. Leveraging this insight, we propose an ROI attack and perform a case study on traffic light detection in Autonomous Driving systems. We evaluate the ROI attack on a production-grade Autonomous Driving system, Baidu Apollo, in end-to-end simulation environments. We find that our attack can make the victim run a red light or cause denial-of-service with a 100% success rate.

  • 12:15 pm - 12:30 pm
    Break + Demo Session 5
    Chair: Ziming Zhao (University at Buffalo)
  • 12:30 pm - 1:25 pm
    Session 5: Connected Vehicle Security
    Chair: Dongyao Chen (Shanghai Jiao Tong University)
    • Shihong Huang (University of Michigan, Ann Arbor), Yiheng Feng (Purdue University), Wai Wong (University of Michigan, Ann Arbor), Qi Alfred Chen (UC Irvine), Z. Morley Mao and Henry X. Liu (University of Michigan, Ann Arbor)

      Abstract

      Connected vehicle (CV) technologies enable data exchange between vehicles and transportation infrastructure. In a CV environment, traffic signal control systems receive CV trajectory data through vehicle-to-infrastructure (V2I) communications to make control decisions. Compared with existing data collection methods (e.g., loop detectors), CV trajectory data provide much richer information and therefore have great potential to improve system performance by reducing total vehicle delay at signalized intersections. However, this connectivity may also raise cybersecurity concerns.

      In this paper, we aim to investigate the security problem of CV-based traffic signal control (CV-TSC) systems. Specifically, we focus on evaluating the impact of falsified data attacks on the system performance. A black-box attack scenario, in which the control logic of a CV-TSC system is unavailable to attackers, is considered. A two-step attack model is constructed. In the first step, the attacker tries to learn the control logic using a surrogate model. Based on the surrogate model, in the second step, the attacker launches falsified data attacks to influence the control systems to make sub-optimal control decisions. In the case study, we apply the attack model to an existing CV-TSC system (i.e., I-SIG) and find intersection delay can be significantly increased. Finally, we discuss some promising defense directions.

    • Natasa Trkulja, David Starobinski (Boston University), and Randall Berry (Northwestern University)

      Abstract

      Cellular Vehicle-to-Everything (C-V2X) has been adopted by the FCC as the technology standard for safety-related transportation and vehicular communications in the US. C-V2X allows vehicles to self-manage the network in the absence of a cellular base station. Since C-V2X networks convey safety-critical messages, it is crucial to assess their security posture. This work contributes a novel set of Denial-of-Service (DoS) attacks on C-V2X networks. The attacks are carried out through adversarial resource-block selection and vary in sophistication and efficiency. In particular, we consider “oblivious” adversaries that ignore recent transmission activity on resource blocks, “smart” adversaries that do monitor activity on each resource block, and “cooperative” adversaries that work together to ensure they attack different targets. We analyze and simulate these attacks to showcase their effectiveness. Assuming a fixed number of attackers, we show that at low vehicle density, smart and cooperative attacks can significantly impact network performance, while at high vehicle density, oblivious attacks are almost as effective as the more sophisticated attacks.
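
The adversary classes the abstract compares can be explored with a toy simulation. The sketch below models resource-block selection only: an oblivious attacker picks a block blindly, while a smart attacker targets a block it has observed to be occupied. All names and parameters (block count, density, trial count) are invented for the example; this is not the authors' simulator.

```python
import random

def jam_attempt(num_blocks, occupied, attacker, rng):
    """Return 1 if one attacker jams a vehicle transmission in one slot.

    occupied: resource blocks currently used by vehicles. An "oblivious"
    attacker picks any block at random; a "smart" attacker monitors the
    channel and always targets an occupied block.
    """
    if attacker == "smart" and occupied:
        target = rng.choice(sorted(occupied))
    else:  # oblivious: ignores recent transmission activity
        target = rng.randrange(num_blocks)
    return 1 if target in occupied else 0

def hit_rate(num_blocks, density, attacker, trials=2000, seed=1):
    """Fraction of slots in which the attacker jams someone, at a given
    vehicle density (number of occupied blocks per slot)."""
    rng = random.Random(seed)
    hits = sum(
        jam_attempt(num_blocks,
                    set(rng.sample(range(num_blocks), density)),
                    attacker, rng)
        for _ in range(trials))
    return hits / trials
```

At low density the oblivious attacker mostly wastes its transmissions on empty blocks while the smart attacker always collides with someone; as density approaches the number of blocks, a random pick almost always lands on an occupied block too, mirroring the abstract's finding that oblivious attacks catch up at high vehicle density.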

    • Anas Alsoliman, Marco Levorato, and Qi Alfred Chen (UC Irvine)

      Abstract

      In autonomous vehicle systems – whether ground or aerial – vehicles and infrastructure-level units communicate among each other continually to ensure safe and efficient autonomous operations. However, different attack scenarios might arise in such environments when a device in the network cannot physically pinpoint the actual transmitter of a certain message. For example, a compromised or a malicious vehicle could send a message with a fabricated location to appear as if it is in the location of another legitimate vehicle, or fabricate multiple messages with fake identities to alter the behavior of other vehicles/infrastructure units and cause traffic congestion or accidents. In this paper, we propose a Vision-Based Two-Factor Authentication and Localization Scheme for Autonomous Vehicles. The scheme leverages the vehicles’ light sources and cameras to establish an “Optical Camera Communication (OCC)” channel providing an auxiliary channel between vehicles to visually authenticate and localize the transmitter of messages that are sent over Radio Frequency (RF) channels. Additionally, we identify possible attacks against the proposed scheme as well as mitigation strategies.

  • 1:25 pm - 1:40 pm
    Break + Demo Session 6
  • 1:40 pm - 2:15 pm
    Session 6: Electric Vehicle Security
    Chair: Ziming Zhao (University at Buffalo)
    • Andreas Unterweger, Fabian Knirsch, Clemens Brunner and Dominik Engel (Center for Secure Energy Informatics, Salzburg University of Applied Sciences, Puch bei Hallein, Austria)

      Abstract

      The growing number of electric vehicles and the expanding electric vehicle ecosystem form a highly heterogeneous environment with a large number of participants that interact and communicate. Finding a charging station, performing vehicle-to-vehicle charging, or processing payments poses privacy threats to customers, as their locations and habits can be traced. In this paper, we present a privacy-preserving solution for grid-to-vehicle, vehicle-to-grid, and vehicle-to-vehicle charging that allows users to find the right charging option in a competitive market environment and provides built-in payments with adjustable and limited risk for both producers and consumers of electricity. The proposed approach builds on blockchain technology and extends a state-of-the-art protocol with payments while still preserving the privacy of the users. The protocol is evaluated with respect to privacy, risk, and scalability. We show that pseudonymity and location privacy (against third parties) are guaranteed throughout the protocol, even beyond a single protocol session. In addition, both risk and scalability can be adjusted based on the underlying blockchain.

    • Abstract

      Over-the-air (OTA) software updates are an important feature for remotely analyzing and upgrading any part of the software currently running on battery-operated electric vehicles and their supply equipment. Even though a secure OTA framework can verify and validate updates before installation, the integrity of the framework itself cannot be guaranteed, and it can easily introduce system and software vulnerabilities with potentially catastrophic consequences. In this paper, we show how a popular automotive OTA secure update framework (Uptane) can be deployed entirely inside a TEE-enabled commercial off-the-shelf (COTS) embedded device to extend its security considerations and improve its resilience against both internal and external security breaches. We also present a software analysis tool that leverages SAWScript to verify our proposed solution against functional and logical inconsistencies, and we validate our approach on real COTS hardware (a Raspberry Pi 3B).

  • 2:15 pm - 2:30 pm
    Closing Remarks and Demo Awards