Yugeng Liu (CISPA Helmholtz Center for Information Security), Zheng Li (CISPA Helmholtz Center for Information Security), Michael Backes (CISPA Helmholtz Center for Information Security), Yun Shen (NetApp), Yang Zhang (CISPA Helmholtz Center for Information Security)

Dataset distillation has emerged as a prominent technique to improve data efficiency when training machine learning models. It encapsulates the knowledge from a large dataset into a smaller synthetic dataset. A model trained on this smaller distilled dataset can attain performance comparable to a model trained on the original training dataset. However, existing dataset distillation techniques mainly aim at achieving the best trade-off between resource usage efficiency and model utility; the security risks they introduce have not been explored. This study performs the first backdoor attack against models trained on data produced by dataset distillation in the image domain. Concretely, we inject triggers into the synthetic data during the distillation procedure rather than during the model training stage, where all previous attacks are performed. We propose two types of backdoor attacks, namely NAIVEATTACK and DOORPING. NAIVEATTACK simply adds triggers to the raw data at the initial distillation phase, while DOORPING iteratively updates the triggers during the entire distillation procedure. We conduct extensive evaluations on multiple datasets, architectures, and dataset distillation techniques. Empirical evaluation shows that NAIVEATTACK achieves decent attack success rate (ASR) scores in some cases, while DOORPING reaches higher ASR scores (close to 1.0) in all cases. Furthermore, we conduct a comprehensive ablation study to analyze the factors that may affect the attack performance. Finally, we evaluate multiple defense mechanisms against our backdoor attacks and show that our attacks can practically circumvent these defense mechanisms.
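To make the NAIVEATTACK idea concrete, the sketch below shows one plausible way to stamp a trigger into raw training data before distillation begins. This is a hypothetical illustration, not the authors' implementation: the function name `poison_before_distillation`, the white-square trigger, the poisoning fraction, and the target class are all assumptions for exposition.

```python
# Minimal sketch of a NAIVEATTACK-style poisoning step (hypothetical helper,
# not the paper's code): a small patch is stamped onto a fraction of the raw
# training images and their labels are flipped to the attacker's target class
# *before* any dataset distillation method is run, so the backdoor signal is
# carried into the distilled synthetic set.
import torch

def poison_before_distillation(images, labels, target_class=0,
                               poison_frac=0.1, patch_size=3):
    """images: (N, C, H, W) float tensor in [0, 1]; labels: (N,) long tensor."""
    images, labels = images.clone(), labels.clone()
    n_poison = int(poison_frac * len(images))
    idx = torch.randperm(len(images))[:n_poison]
    # Stamp a white square trigger into the bottom-right corner.
    images[idx, :, -patch_size:, -patch_size:] = 1.0
    # Relabel poisoned samples to the attacker-chosen target class.
    labels[idx] = target_class
    return images, labels

# The poisoned (images, labels) pair is then handed to an off-the-shelf
# dataset distillation technique in place of the clean training set.
```

DOORPING, by contrast, would update the trigger pattern itself at each distillation iteration rather than fixing it up front; the abstract gives no further detail, so no sketch is attempted for it here.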
