Ahmed Salem (CISPA Helmholtz Center for Information Security), Michael Backes (CISPA Helmholtz Center for Information Security), Yang Zhang (CISPA Helmholtz Center for Information Security)

Machine learning (ML) has established itself as a cornerstone for various critical applications, ranging from autonomous driving to authentication systems. However, with the increasing adoption of machine learning models, multiple attacks have emerged. One class of such attacks is the training time attack, in which an adversary executes their attack before or during the training of the machine learning model. In this work, we propose a new training time attack against computer vision-based machine learning models, namely the model hijacking attack. The adversary aims to hijack a target model so that it executes a task different from its original one, without the model owner noticing. Model hijacking can cause accountability and security risks, since the owner of a hijacked model can be framed for offering illegal or unethical services. Model hijacking attacks are launched in the same way as existing data poisoning attacks. However, one requirement of the model hijacking attack is stealthiness, i.e., the data samples used to hijack the target model should look similar to the model's original training dataset. To this end, we propose two different model hijacking attacks, namely Chameleon and Adverse Chameleon, based on a novel encoder-decoder style ML model, namely the Camouflager. Our evaluation shows that both of our model hijacking attacks achieve a high attack success rate, with a negligible drop in model utility.
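To make the mechanism described above concrete, the following is a minimal, hypothetical PyTorch sketch of a Camouflager-style encoder-decoder: one encoder processes a hijacking sample, another processes a sample resembling the target model's original data, and a decoder fuses the two into a camouflaged sample used for poisoning. The architecture, loss, and all names here are illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn

# Illustrative sketch only -- NOT the authors' Camouflager implementation.
# It mirrors the abstract's idea: disguise hijacking samples so they visually
# resemble the target model's original training data before poisoning.

class Camouflager(nn.Module):
    """Hypothetical encoder-decoder mapping a hijacking sample to a
    camouflaged sample that resembles the original data distribution."""

    def __init__(self, channels: int = 3):
        super().__init__()
        # One encoder for the hijacking sample, one for an original-looking
        # reference sample (layer choices are assumptions).
        self.hijack_encoder = nn.Sequential(
            nn.Conv2d(channels, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
        )
        self.original_encoder = nn.Sequential(
            nn.Conv2d(channels, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
        )
        # Decoder fuses both embeddings into a camouflaged image.
        self.decoder = nn.Sequential(
            nn.Conv2d(128, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, channels, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, hijack_img: torch.Tensor,
                original_img: torch.Tensor) -> torch.Tensor:
        z = torch.cat([self.hijack_encoder(hijack_img),
                       self.original_encoder(original_img)], dim=1)
        return self.decoder(z)


# Training intuition (the loss below is an assumption, not the paper's exact
# objective): a visual loss pulls the camouflaged sample toward the
# original-looking sample; a separate semantic loss would keep it useful
# for the hijacking task.
camouflager = Camouflager()
hijack_batch = torch.rand(8, 3, 32, 32)    # samples from the hijacking dataset
original_batch = torch.rand(8, 3, 32, 32)  # samples resembling the target's data
camouflaged = camouflager(hijack_batch, original_batch)
visual_loss = nn.functional.mse_loss(camouflaged, original_batch)
```

The camouflaged samples would then be injected into the target model's training set in the same manner as a standard data poisoning attack, which is the delivery mechanism the abstract describes.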
