Hengzhi Pei (UIUC), Jinyuan Jia (UIUC, Penn State), Wenbo Guo (UC Berkeley, Purdue University), Bo Li (UIUC), Dawn Song (UC Berkeley)

Backdoor attacks have become a major security threat to deploying machine learning models in security-critical applications. Existing research has proposed many defenses against backdoor attacks. Despite demonstrating some empirical efficacy, none of these techniques provides a formal, provable security guarantee against arbitrary attacks. As a result, they can be easily broken by strong adaptive attacks, as shown in our evaluation. In this work, we propose TextGuard, the first provable defense against backdoor attacks on text classification. In particular, TextGuard first divides the (backdoored) training data into sub-training sets by splitting each training sentence into sub-sentences. This partitioning ensures that a majority of the sub-training sets do not contain the backdoor trigger. Subsequently, a base classifier is trained on each sub-training set, and their ensemble provides the final prediction. We theoretically prove that when the length of the backdoor trigger falls within a certain threshold, TextGuard guarantees that its prediction will remain unaffected by the presence of the trigger in training and testing inputs. In our evaluation, we demonstrate the effectiveness of TextGuard on three benchmark text classification tasks, surpassing the certified accuracy of existing certified defenses against backdoor attacks. Furthermore, we propose additional strategies to enhance the empirical performance of TextGuard. Comparisons with state-of-the-art empirical defenses validate the superiority of TextGuard in countering multiple backdoor attacks. Our code and data are available at https://github.com/AI-secure/TextGuard.
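The partition-and-ensemble idea in the abstract can be illustrated with a minimal sketch. The key point is that words are assigned to groups deterministically (here via a hash of the word, one plausible choice), so any trigger word always lands in the same group across all training and test inputs, leaving the other groups trigger-free; a majority vote over the base classifiers then masks the corrupted group. The function names and the use of MD5 are illustrative assumptions, not the paper's exact implementation.

```python
import hashlib
from collections import Counter


def split_into_subsentences(sentence, m):
    """Deterministically assign each word of a sentence to one of m groups
    by hashing the word itself, so a given (trigger) word always falls
    into the same group regardless of which sentence contains it."""
    groups = [[] for _ in range(m)]
    for word in sentence.split():
        idx = int(hashlib.md5(word.encode("utf-8")).hexdigest(), 16) % m
        groups[idx].append(word)
    return [" ".join(g) for g in groups]


def ensemble_predict(classifiers, sentence):
    """Split the test sentence the same way, feed sub-sentence i to base
    classifier i, and return the majority-vote label."""
    subs = split_into_subsentences(sentence, len(classifiers))
    votes = Counter(clf(sub) for clf, sub in zip(classifiers, subs))
    return votes.most_common(1)[0][0]
```

Because the word-to-group assignment is a pure function of the word, a trigger of length t can corrupt at most t of the m sub-training sets, which is what makes the majority vote certifiable when t is small relative to m.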
