Ahmed Salem (CISPA Helmholtz Center for Information Security), Michael Backes (CISPA Helmholtz Center for Information Security), Yang Zhang (CISPA Helmholtz Center for Information Security)

Machine learning (ML) has established itself as a cornerstone of various critical applications, ranging from autonomous driving to authentication systems. However, with this increasing adoption of machine learning models, multiple attacks have emerged. One class of such attacks is training time attacks, in which an adversary executes their attack before or during the training of the machine learning model. In this work, we propose a new training time attack against computer vision-based machine learning models, namely the model hijacking attack. The adversary aims to hijack a target model to execute a task different from its original one without the model owner noticing. Model hijacking can cause accountability and security risks, since the owner of a hijacked model can be framed for having their model offer illegal or unethical services. Model hijacking attacks are launched in the same way as existing data poisoning attacks. However, one requirement of the model hijacking attack is stealthiness, i.e., the data samples used to hijack the target model should look similar to the model's original training dataset. To this end, we propose two different model hijacking attacks, namely Chameleon and Adverse Chameleon, based on a novel encoder-decoder style ML model, the Camouflager. Our evaluation shows that both of our model hijacking attacks achieve a high attack success rate, with a negligible drop in model utility.
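To make the stealthiness requirement concrete, the following is a minimal, hypothetical sketch (not the authors' implementation) of the idea behind an encoder-decoder "camouflager": it transforms a hijacking-task sample so that it visually resembles the target model's original training data while keeping the hijacking sample's content in feature space. The architecture, loss formulation, and names below are illustrative assumptions based only on the description in the abstract.

```python
# Illustrative sketch only: a toy encoder-decoder that camouflages hijacking
# samples as original-dataset samples. All design choices are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class Camouflager(nn.Module):
    """Encoder-decoder mapping a hijacking sample to a camouflaged sample."""

    def __init__(self, channels: int = 3):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(channels, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, channels, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.decoder(self.encoder(x))


def camouflage_loss(camouflaged, original_ref, hijack_sample, feature_extractor,
                    alpha: float = 1.0, beta: float = 1.0):
    """Hypothetical two-part objective:
    - visual term: the camouflaged output should look like the original dataset;
    - semantic term: its features should stay close to the hijacking sample's,
      so the hijacked model can still learn the hijacking task from it."""
    visual = F.mse_loss(camouflaged, original_ref)
    semantic = F.mse_loss(feature_extractor(camouflaged),
                          feature_extractor(hijack_sample))
    return alpha * visual + beta * semantic


if __name__ == "__main__":
    # Toy usage with random tensors standing in for real images.
    camo = Camouflager()
    feat = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                         nn.AdaptiveAvgPool2d(4), nn.Flatten())  # stand-in extractor
    hijack = torch.rand(8, 3, 32, 32)    # samples from the hijacking dataset
    original = torch.rand(8, 3, 32, 32)  # samples from the target's original data
    out = camo(hijack)
    loss = camouflage_loss(out, original, hijack, feat)
    loss.backward()
    print(out.shape, float(loss))
```

Under this framing, the camouflaged outputs would be injected into the target's training set in the same way as ordinary poisoning samples; the actual Chameleon and Adverse Chameleon attacks and the Camouflager training procedure are defined in the paper itself.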
