Dung Thuy Nguyen (Vanderbilt University), Ngoc N. Tran (Vanderbilt University), Taylor T. Johnson (Vanderbilt University), Kevin Leach (Vanderbilt University)

In recent years, the rise of machine learning (ML) in cybersecurity has brought new challenges, including the growing threat of backdoor poisoning attacks on ML malware classifiers. These attacks aim to manipulate model behavior in the presence of a particular input trigger. For instance, adversaries could inject malicious samples into public malware repositories, contaminating the training data and ultimately causing the ML model to misclassify malware. Current countermeasures predominantly focus on detecting poisoned samples by leveraging disagreement among the outputs of a diverse set of ensemble models on training data points.
However, these methods are not applicable in ML-as-a-Service (MLaaS) settings or for users who seek to purify a backdoored model after training. To address this gap, we introduce PBP, a post-training defense for malware classifiers that mitigates various backdoor embeddings without assuming any specific embedding mechanism. Our method exploits the influence of backdoor attacks on the activation distribution of neural networks, independent of the trigger-embedding method.
In the presence of a backdoor attack, the activation distribution of each layer is distorted into a mixture of distributions. By regulating the statistics of the batch normalization layers, we can guide a backdoored model to behave similarly to a clean one. Our method demonstrates substantial advantages over several state-of-the-art defenses, as evidenced by experiments on two datasets, two types of backdoor methods, and various attack configurations. Our experiments show that PBP can mitigate even state-of-the-art backdoor attacks against malware classifiers, e.g., Jigsaw Puzzle, which was previously demonstrated to be stealthy against existing backdoor defenses. Notably, our approach requires only a small portion of the training data --- only 1% --- to purify the backdoor and reduce the attack success rate from 100% to almost 0%, a 100-fold improvement over the baseline methods. Our code is available at https://github.com/judydnguyen/pbp-backdoor-purification-official.
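To make the batch-normalization idea concrete, the sketch below is a simplified illustration, not the authors' full PBP procedure: it re-estimates the BatchNorm running statistics of a possibly backdoored PyTorch classifier using a small clean subset (e.g., roughly 1% of the training data). The helper name `reestimate_bn_statistics` and the `clean_loader` argument are hypothetical names introduced here for illustration.

```python
# Minimal sketch (not the full PBP pipeline): re-estimate BatchNorm running
# statistics of a possibly backdoored classifier from a small clean subset.
import torch
import torch.nn as nn

def reestimate_bn_statistics(model: nn.Module, clean_loader, device="cpu"):
    """Reset BatchNorm running mean/var and re-accumulate them from clean data."""
    model.to(device)
    # Reset stored statistics so they are no longer skewed by poisoned batches.
    for module in model.modules():
        if isinstance(module, (nn.BatchNorm1d, nn.BatchNorm2d)):
            module.reset_running_stats()
            module.momentum = None  # cumulative moving average during re-estimation

    # Forward passes in train mode update the running statistics; no gradients needed.
    model.train()
    with torch.no_grad():
        for inputs, _ in clean_loader:
            model(inputs.to(device))
    model.eval()
    return model
```

In this simplified view, re-accumulating the running mean and variance from clean batches pulls each layer's statistics back toward the clean distribution; the defense described in the abstract regulates these statistics as part of a broader purification procedure, of which this sketch shows only the statistic re-estimation step.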
