Ahmed Salem (CISPA Helmholtz Center for Information Security), Michael Backes (CISPA Helmholtz Center for Information Security), Yang Zhang (CISPA Helmholtz Center for Information Security)

Machine learning (ML) has established itself as a cornerstone for various critical applications ranging from autonomous driving to authentication systems. However, with the increasing adoption of machine learning models, multiple attacks have emerged. One class of such attacks is training time attacks, in which an adversary executes their attack before or during the training of the machine learning model. In this work, we propose a new training time attack against computer vision-based machine learning models, namely the model hijacking attack. The adversary aims to hijack a target model to execute a task different from its original one without the model owner noticing. Model hijacking can cause accountability and security risks, since the owner of a hijacked model can be framed for offering illegal or unethical services through their model. Model hijacking attacks are launched in the same way as existing data poisoning attacks. However, the model hijacking attack must additionally be stealthy, i.e., the data samples used to hijack the target model should look similar to the model's original training dataset. To this end, we propose two different model hijacking attacks, namely Chameleon and Adverse Chameleon, based on a novel encoder-decoder style ML model, namely the Camouflager. Our evaluation shows that both of our model hijacking attacks achieve a high attack success rate, with a negligible drop in model utility.
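To make the camouflaging idea concrete, the following is a minimal, hypothetical sketch (in PyTorch) of a Camouflager-style encoder-decoder: one encoder for the hijacking sample, one for a benign-looking sample from the target model's domain, and a decoder that emits a camouflaged sample. The layer sizes, the two-encoder layout, and the specific visual/semantic loss terms are illustrative assumptions, not the paper's exact design.

```python
# Hypothetical Camouflager-style sketch; architecture and losses are
# illustrative assumptions, not the paper's exact design.
import torch
import torch.nn as nn

class Camouflager(nn.Module):
    """Encodes a hijacking sample and a benign 'hijackee' sample, then
    decodes a camouflaged sample intended to look like the benign one
    while still carrying the hijacking task's content."""
    def __init__(self):
        super().__init__()
        def encoder():
            return nn.Sequential(
                nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            )
        self.enc_hijack = encoder()   # encodes the hijacking sample
        self.enc_benign = encoder()   # encodes the benign-looking sample
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x_hijack, x_benign):
        # Concatenate the two feature maps along the channel axis.
        z = torch.cat([self.enc_hijack(x_hijack), self.enc_benign(x_benign)], dim=1)
        return self.decoder(z)

# Illustrative training step: the visual loss pulls the camouflaged output
# toward the benign sample; the semantic loss keeps its features close to
# those of the hijacking sample. Both terms are assumptions of this sketch.
model = Camouflager()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x_hijack = torch.rand(8, 3, 32, 32)   # hijacking-task samples (toy data here)
x_benign = torch.rand(8, 3, 32, 32)   # samples resembling the target's dataset
camo = model(x_hijack, x_benign)
visual_loss = nn.functional.mse_loss(camo, x_benign)
semantic_loss = nn.functional.mse_loss(model.enc_hijack(camo),
                                       model.enc_hijack(x_hijack))
loss = visual_loss + semantic_loss
opt.zero_grad(); loss.backward(); opt.step()
```

In this sketch, the visual loss enforces the stealthiness requirement while the semantic loss preserves the hijacking task's information; an Adverse Chameleon-style variant would presumably add a term pushing the camouflaged sample's features away from the benign sample's features.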
