Rui Zeng (Zhejiang University), Xi Chen (Zhejiang University), Yuwen Pu (Zhejiang University), Xuhong Zhang (Zhejiang University), Tianyu Du (Zhejiang University), Shouling Ji (Zhejiang University)

Backdoors can be injected into NLP models to induce misbehavior when the input text contains a specific feature, known as a trigger, that the attacker secretly selects. Unlike static text triggers, which are fixed tokens, words, phrases, or sentences, dynamic backdoor attacks on NLP models design triggers associated with abstract and latent text features (e.g., style), making them considerably stealthier than traditional static backdoor attacks. However, existing research on NLP backdoor detection primarily focuses on defending against static backdoor attacks, while the detection of dynamic backdoors in NLP models remains largely unexplored.

This paper presents CLIBE, the first framework to detect dynamic backdoors in Transformer-based NLP models. At a high level, CLIBE injects a "few-shot perturbation" into the suspect Transformer model by crafting an optimized weight perturbation in the attention layers, so that the perturbed model classifies a limited number of reference samples as a target label. Subsequently, CLIBE leverages the generalization capability of this "few-shot perturbation" to determine whether the original suspect model contains a dynamic backdoor. Extensive evaluation on three advanced NLP dynamic backdoor attacks, two widely used Transformer frameworks, and four real-world classification tasks strongly validates the effectiveness and generality of CLIBE. We also demonstrate the robustness of CLIBE against various adaptive attacks. Furthermore, we employ CLIBE to scrutinize 49 popular Transformer models on Hugging Face and discover one model exhibiting a high probability of containing a dynamic backdoor. We have contacted Hugging Face and provided detailed evidence of this model's backdoor behavior. Moreover, we show that CLIBE can be easily extended to detect backdoored text generation models (e.g., GPT-Neo-1.3B) that are modified to exhibit toxic behavior. To the best of our knowledge, CLIBE is the first framework capable of detecting backdoors in text generation models without requiring access to trigger input test samples. The code is available at https://github.com/Raytsang123/CLIBE.
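
The core detection loop described in the abstract can be sketched in a few lines of PyTorch. This is a minimal illustration under stated assumptions, not the authors' released code: the model name, reference texts, target label, step count, learning rate, and penalty weight `lam` are hypothetical placeholders, the perturbation is restricted to attention-layer weights as described above, and `torch.func.functional_call` (PyTorch >= 2.0) is used so that gradients flow to the perturbation rather than the model weights.

```python
# Minimal sketch of the "few-shot perturbation" idea; all names and
# hyperparameters below are hypothetical placeholders.
import torch
from torch.func import functional_call  # requires PyTorch >= 2.0
from transformers import AutoModelForSequenceClassification, AutoTokenizer

name = "bert-base-uncased"  # stand-in for the suspect model
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name)
model.eval()
for p in model.parameters():
    p.requires_grad_(False)  # only the perturbation is trained

# Trainable perturbation, restricted to attention-layer weights.
deltas = {n: torch.zeros_like(p, requires_grad=True)
          for n, p in model.named_parameters()
          if "attention" in n and n.endswith("weight")}

def perturbed_logits(texts):
    """Run the model with the perturbation added to the attention weights."""
    params = {n: (p + deltas[n] if n in deltas else p)
              for n, p in model.named_parameters()}
    batch = tok(texts, return_tensors="pt", padding=True, truncation=True)
    return functional_call(model, params, kwargs=dict(batch)).logits

few_shot = ["placeholder reference text A", "placeholder reference text B"]
target_label = 1  # hypothetical target class
target = torch.tensor([target_label] * len(few_shot))
opt = torch.optim.Adam(deltas.values(), lr=1e-3)
lam = 0.05  # penalty keeping the perturbation small

for _ in range(200):
    loss = torch.nn.functional.cross_entropy(perturbed_logits(few_shot), target)
    loss = loss + lam * sum(d.abs().sum() for d in deltas.values())
    opt.zero_grad()
    loss.backward()
    opt.step()

# Generalization test: if a perturbation fitted on only a few references also
# drives many held-out samples to the target label, the suspect model likely
# contains a dynamic backdoor.
held_out = ["placeholder held-out text 1", "placeholder held-out text 2"]
with torch.no_grad():
    preds = perturbed_logits(held_out).argmax(dim=-1)
flip_rate = (preds == target_label).float().mean().item()
print(f"target-label rate on held-out samples: {flip_rate:.2f}")
```

The deciding signal here is the behavior on held-out data: a small perturbation fitted on only a few references that nonetheless generalizes broadly suggests the model already encodes a latent shortcut toward the target label, whereas for a clean model such a perturbation would not be expected to transfer.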
