Hamed Haddadpajouh (University of Guelph), Ali Dehghantanha (University of Guelph)

As Internet of Things devices become ever more deeply integrated into everyday systems, the security challenges posed by autonomous, self-executing devices grow increasingly critical. This research addresses the vulnerability of deep learning-based malware threat-hunting models, particularly in the context of Industrial Internet of Things environments. The study introduces an adversarial machine learning attack model tailored to generating adversarial payloads at the bytecode level of executable files.

Our investigation focuses on the MalConv malware threat-hunting model, employing the Fast Gradient Sign Method (FGSM) to craft adversarial instances. The proposed methodology is systematically evaluated on a comprehensive dataset of cloud-edge Internet of Things malware samples. The empirical findings reveal a significant reduction in the accuracy of the malware threat-hunting model, dropping from an initial 99% to 82%. Moreover, our approach demonstrates the effectiveness of adversarial attacks that leverage code repositories, showing their ability to evade AI-powered malware threat-hunting mechanisms.
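To illustrate the general style of attack, the sketch below shows a generic FGSM perturbation against a differentiable byte-classifier in PyTorch. It is a minimal illustration only, not the authors' implementation: the model, tensor shapes, and epsilon budget are assumptions, and a practical bytecode-level attack would further need to map the perturbed embeddings back to valid byte values so the executable remains functional.

```python
# Minimal FGSM sketch (illustrative assumptions, not the paper's code).
import torch
import torch.nn.functional as F

def fgsm_perturb(model, embeddings, label, epsilon=0.1):
    """Craft an adversarial perturbation of an embedded byte sequence.

    embeddings: float tensor, shape (1, seq_len, embed_dim)
    label:      LongTensor of shape (1,) with the true class (e.g. 1 = malware)
    epsilon:    per-dimension perturbation budget
    """
    # Clone and enable gradient tracking on the embedded input.
    embeddings = embeddings.clone().detach().requires_grad_(True)
    logits = model(embeddings)            # shape: (1, num_classes)
    loss = F.cross_entropy(logits, label)
    loss.backward()
    # FGSM step: move each embedding dimension in the direction that
    # increases the classification loss, bounded by epsilon.
    adv_embeddings = embeddings + epsilon * embeddings.grad.sign()
    return adv_embeddings.detach()
```

In a byte-level setting such as MalConv, the perturbed embeddings would typically be projected back to the nearest valid byte embeddings, with changes confined to bytes that can be modified without breaking the binary (for example, appended padding).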

This work not only offers a practical path toward hardening deep learning-based malware threat-hunting models in Internet of Things environments but also underscores the role of code repositories as a potential attack vector. The outcomes of this investigation emphasize the need to recognize code repositories as a distinct attack surface within the landscape of malware threat-hunting models deployed in Internet of Things environments.
