Ahmed Salem (CISPA Helmholtz Center for Information Security), Michael Backes (CISPA Helmholtz Center for Information Security), Yang Zhang (CISPA Helmholtz Center for Information Security)

Machine learning (ML) has established itself as a cornerstone of various critical applications, ranging from autonomous driving to authentication systems. However, with the increasing adoption of machine learning models, multiple attacks have emerged. One class of such attacks is training time attacks, in which an adversary executes their attack before or during the training of the machine learning model. In this work, we propose a new training time attack against computer vision based machine learning models, namely the model hijacking attack. The adversary aims to hijack a target model to execute a task different from its original one without the model owner noticing. Model hijacking can cause accountability and security risks, since the owner of a hijacked model can be framed for offering illegal or unethical services through their model. Model hijacking attacks are launched in the same way as existing data poisoning attacks. However, the model hijacking attack must additionally be stealthy, i.e., the data samples used to hijack the target model should look similar to the model's original training dataset. To this end, we propose two different model hijacking attacks, namely Chameleon and Adverse Chameleon, based on a novel encoder-decoder style ML model, namely the Camouflager. Our evaluation shows that both of our model hijacking attacks achieve a high attack success rate, with a negligible drop in model utility.
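
The abstract does not detail the Camouflager's architecture, but its stated role, disguising hijacking-dataset samples as original-dataset samples, lends itself to a small encoder-decoder sketch. The PyTorch snippet below is a minimal illustrative sketch under our own assumptions: the two-encoder layout, the 3x32x32 input size, and the loss terms and names (ConvEncoder, ConvDecoder, visual_loss, semantic_loss) are hypothetical choices for exposition, not the paper's exact design.

```python
import torch
import torch.nn as nn

class ConvEncoder(nn.Module):
    """Toy convolutional encoder: maps a 3x32x32 image to a 64x8x8 feature map."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )

    def forward(self, x):
        return self.net(x)

class ConvDecoder(nn.Module):
    """Toy decoder: maps the concatenated feature maps back to a 3x32x32 image."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, z):
        return self.net(z)

class Camouflager(nn.Module):
    """Hypothetical encoder-decoder camouflager: fuses a hijacking sample with
    an original-task sample so the decoded output visually resembles the
    original data while still carrying the hijacking task's content."""
    def __init__(self):
        super().__init__()
        self.hijack_enc = ConvEncoder()  # encodes the hijacking sample
        self.orig_enc = ConvEncoder()    # encodes the original-task sample
        self.decoder = ConvDecoder()

    def forward(self, hijack_x, orig_x):
        z = torch.cat([self.hijack_enc(hijack_x), self.orig_enc(orig_x)], dim=1)
        return self.decoder(z)

# Illustrative training step (assumed losses): a visual loss pulls the
# camouflaged sample toward the original sample, while a semantic loss keeps
# its encoded features close to those of the hijacking sample.
camouflager = Camouflager()
opt = torch.optim.Adam(camouflager.parameters(), lr=1e-3)
hijack_x = torch.rand(8, 3, 32, 32)  # stand-in batch from the hijacking dataset
orig_x = torch.rand(8, 3, 32, 32)    # stand-in batch from the original dataset
cam_x = camouflager(hijack_x, orig_x)
visual_loss = nn.functional.mse_loss(cam_x, orig_x)
semantic_loss = nn.functional.mse_loss(
    camouflager.hijack_enc(cam_x), camouflager.hijack_enc(hijack_x))
loss = visual_loss + semantic_loss
opt.zero_grad()
loss.backward()
opt.step()
```

Training a camouflager of this shape would let an adversary poison the target model with samples that look like the original data (low visual loss) while still encoding the hijacking task (low semantic loss), which matches the stealthiness requirement the abstract describes.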
