Hamed Haddadpajouh (University of Guelph), Ali Dehghantanha (University of Guelph)

As the integration of Internet of Things devices continues to grow, the security challenges associated with autonomous, self-executing devices become increasingly critical. This research addresses the vulnerability of deep learning-based malware threat-hunting models, particularly in Industrial Internet of Things environments. The study introduces an adversarial machine learning attack model tailored to generating adversarial payloads at the bytecode level of executable files.

Our investigation focuses on the MalConv malware threat-hunting model, employing the Fast Gradient Sign Method (FGSM) to craft adversarial instances. The proposed methodology is evaluated on a comprehensive dataset of cloud-edge Internet of Things malware samples. The empirical findings reveal a significant reduction in the accuracy of the malware threat-hunting model, dropping from an initial 99% to 82%. Moreover, the proposed approach demonstrates the effectiveness of adversarial attacks that leverage code repositories, showing their ability to evade AI-powered malware threat-hunting mechanisms.
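To make the attack concrete, the sketch below illustrates an FGSM-style byte-level evasion against a simplified MalConv-like classifier. This is a minimal illustration under stated assumptions, not the paper's implementation: the toy network (`TinyMalConv`), the append-only payload region, the perturbation step in embedding space, the `payload_len` and `epsilon` values, and the nearest-byte projection are all choices made for the example.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyMalConv(nn.Module):
    """Simplified MalConv-style classifier over raw bytes (values 0-255)."""
    def __init__(self, embed_dim=8, channels=64, window=16, stride=16):
        super().__init__()
        self.embed = nn.Embedding(256, embed_dim)
        self.conv = nn.Conv1d(embed_dim, channels, window, stride=stride)
        self.gate = nn.Conv1d(embed_dim, channels, window, stride=stride)
        self.fc = nn.Linear(channels, 1)

    def forward_from_embedding(self, emb):             # emb: (batch, length, embed_dim)
        x = emb.transpose(1, 2)                        # (batch, embed_dim, length)
        h = self.conv(x) * torch.sigmoid(self.gate(x)) # gated convolution
        h = F.max_pool1d(h, h.size(-1)).squeeze(-1)    # global max pooling
        return self.fc(h)                              # malware logit

    def forward(self, byte_ids):
        return self.forward_from_embedding(self.embed(byte_ids))

def fgsm_append_attack(model, byte_ids, payload_len=512, epsilon=0.5):
    """Append a payload region, perturb it with one FGSM step in embedding
    space, and project each perturbed position back to the nearest real byte.
    (Illustrative only; values and strategy are assumptions for this sketch.)"""
    payload = torch.randint(0, 256, (byte_ids.size(0), payload_len))
    full = torch.cat([byte_ids, payload], dim=1)

    emb = model.embed(full).detach()
    emb.requires_grad_(True)
    logit = model.forward_from_embedding(emb)
    # Loss with respect to the true (malware) label.
    loss = F.binary_cross_entropy_with_logits(logit, torch.ones_like(logit))
    loss.backward()

    # FGSM step: move only the appended payload embeddings in the direction
    # that increases the loss, pushing the prediction toward "benign".
    adv_emb = emb.detach().clone()
    adv_emb[:, -payload_len:] += epsilon * emb.grad[:, -payload_len:].sign()

    # Project perturbed embeddings back to valid bytes (nearest embedding row).
    adv_slice = adv_emb[:, -payload_len:]                        # (batch, payload, dim)
    byte_table = model.embed.weight.detach()                     # (256, dim)
    dists = ((adv_slice.unsqueeze(2) - byte_table.view(1, 1, 256, -1)) ** 2).sum(-1)
    adv_payload = dists.argmin(dim=-1)                           # (batch, payload)
    return torch.cat([byte_ids, adv_payload], dim=1)

if __name__ == "__main__":
    model = TinyMalConv()
    sample = torch.randint(0, 256, (1, 4096))    # stand-in for a malware byte stream
    adv = fgsm_append_attack(model, sample)
    print(torch.sigmoid(model(sample)).item(), torch.sigmoid(model(adv)).item())
```

Because executable bytes are discrete, the perturbation is computed in the continuous embedding space and then snapped back to valid byte values; the functionality-preserving constraints of real executable files are omitted in this sketch.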

This work not only offers a practical path toward bolstering deep learning-based malware threat-hunting models in Internet of Things environments but also underscores the role of code repositories as a potential attack vector. The findings emphasize the need to recognize code repositories as a distinct attack surface for malware threat-hunting models deployed in Internet of Things environments.
