Behrad Tajalli (Radboud University), Stefanos Koffas (Delft University of Technology), Stjepan Picek (Radboud University)

Backdoor attacks in machine learning have drawn significant attention for their potential to compromise models stealthily, yet most research has focused on homogeneous data such as images. In this work, we propose a novel backdoor attack on tabular data, which is particularly challenging due to the presence of both numerical and categorical features.
Our key idea is a novel technique that converts categorical values into floating-point representations. This encoding preserves enough information to maintain clean-model accuracy on par with traditional methods such as one-hot or ordinal encoding, and it allows us to craft a gradient-based universal perturbation that applies to all features, including categorical ones.
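To make the idea concrete, the following is a minimal illustrative sketch, not the authors' actual encoding or trigger-generation procedure: it assumes a hypothetical ordinal-to-float mapping for categorical values and a fixed additive perturbation `delta` standing in for the universal trigger. All function and variable names here are invented for illustration.

```python
import numpy as np

def encode_categorical(column):
    """Map each category to a float in [0, 1] (illustrative ordinal-to-float encoding)."""
    categories = sorted(set(column))
    mapping = {c: i / max(len(categories) - 1, 1) for i, c in enumerate(categories)}
    return np.array([mapping[c] for c in column]), mapping

def apply_trigger(X, delta):
    """Add a universal perturbation delta to every sample; clip to the valid range."""
    return np.clip(X + delta, 0.0, 1.0)

# Toy data: one numerical feature and one categorical feature.
cat_vals, mapping = encode_categorical(["red", "blue", "red", "green"])
X = np.stack([np.array([0.2, 0.5, 0.9, 0.4]), cat_vals], axis=1)

# Hypothetical trigger: the same perturbation is applied to all rows,
# including the float-encoded categorical column.
delta = np.array([0.05, 0.10])
X_poisoned = apply_trigger(X, delta)
```

The point of the sketch is only that once categorical features live in a continuous space, a single gradient-derived perturbation can be added uniformly across all feature types, which is what the attack exploits.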

We evaluate our method on five datasets and four popular models. Our results show up to a 100% attack success rate in both white-box and black-box settings (including real-world platforms such as Vertex AI), revealing a severe vulnerability for tabular data. Our method surpasses prior work such as Tabdoor in attack performance while remaining stealthy against state-of-the-art defenses. We evaluate our attack against Spectral Signatures, Neural Cleanse, Beatrix, and Fine-Pruning, none of which defends successfully against it. We also verify that our attack bypasses popular outlier detection mechanisms.
