Pablo Moriano (Oak Ridge National Laboratory), Robert A. Bridges (Oak Ridge National Laboratory) and Michael D. Iannacone (Oak Ridge National Laboratory)

Vehicular Controller Area Networks (CANs) are susceptible to cyber attacks of different levels of sophistication. Fabrication attacks are the easiest to administer—an adversary simply sends (extra) frames on a CAN—but also the easiest to detect because they disrupt frame frequency. To overcome time-based detection methods, adversaries must administer masquerade attacks by sending frames in lieu of (and therefore at the expected time of) benign frames but with malicious payloads. Research efforts have proven that CAN attacks, and masquerade attacks in particular, can affect vehicle functionality. Examples include causing unintended acceleration, deactivating the vehicle's brakes, and steering the vehicle. We hypothesize that masquerade attacks modify the nuanced correlations of CAN signal time series and how they cluster together. Therefore, changes in cluster assignments should indicate anomalous behavior. We confirm this hypothesis by leveraging our previously developed capability for reverse engineering CAN signals (i.e., CAN-D [Controller Area Network Decoder]), focusing on advancing the state of the art for detecting masquerade attacks by analyzing time series extracted from raw CAN frames. Specifically, we demonstrate that masquerade attacks can be detected by computing time series clustering similarity using hierarchical clustering on the vehicle's CAN signals (time series) and comparing the clustering similarity across CAN captures with and without attacks. We test our approach on a previously collected CAN dataset with masquerade attacks (i.e., the ROAD dataset) and develop a forensic tool as a proof of concept to demonstrate the potential of the proposed approach for detecting CAN masquerade attacks.
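The core idea of the approach (hierarchically clustering a capture's CAN signal time series, then comparing cluster assignments across captures) can be illustrated with a short sketch. The Python snippet below is a minimal illustration, not the paper's implementation: the correlation-based distance, average linkage, fixed cluster count, and the use of the adjusted Rand index as the clustering-similarity measure are all assumptions, and the helper names (cluster_signals, clustering_similarity) are hypothetical. Signal extraction (e.g., with CAN-D) is assumed to have already produced time-aligned per-signal series.

# Minimal sketch under the assumptions noted above; not the authors' implementation.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform
from sklearn.metrics import adjusted_rand_score

def cluster_signals(signals, n_clusters=10):
    """Hierarchically cluster CAN signals (rows = signals, columns = time samples)."""
    corr = np.corrcoef(signals)                  # pairwise Pearson correlation (assumed metric)
    dist = 1.0 - np.abs(corr)                    # convert correlation to a distance
    np.fill_diagonal(dist, 0.0)
    condensed = squareform(dist, checks=False)   # condensed distance vector for linkage
    tree = linkage(condensed, method="average")  # agglomerative hierarchical clustering (assumed linkage)
    return fcluster(tree, t=n_clusters, criterion="maxclust")

def clustering_similarity(benign_signals, test_signals, n_clusters=10):
    """Compare cluster assignments of the same signals in two captures."""
    labels_benign = cluster_signals(benign_signals, n_clusters)
    labels_test = cluster_signals(test_signals, n_clusters)
    return adjusted_rand_score(labels_benign, labels_test)  # assumed similarity measure

# Toy usage: overwrite one signal to mimic a masqueraded payload.
rng = np.random.default_rng(0)
benign = rng.standard_normal((40, 5000))         # 40 signals, 5000 samples (synthetic data)
attack = benign.copy()
attack[3] = rng.standard_normal(5000)            # replace one signal's values
print(clustering_similarity(benign, attack))     # ARI between the two captures' cluster assignments

Under these assumptions, a clustering similarity that falls well below typical benign-versus-benign values would indicate that an attack has perturbed the signals' correlation structure.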
