Yuefeng Peng (University of Massachusetts Amherst), Ali Naseh (University of Massachusetts Amherst), Amir Houmansadr (University of Massachusetts Amherst)

Deep learning models, while achieving remarkable performance across various tasks, are vulnerable to membership inference attacks (MIAs), in which adversaries determine whether a specific data point was part of the model's training set. This susceptibility raises substantial privacy concerns, especially when models are trained on sensitive datasets. Although various defenses have been proposed, there is still substantial room for improvement in the privacy-utility trade-off. In this work, we introduce a novel defense framework against MIAs that leverages generative models. The key intuition of our defense is to *remove the differences between member and non-member inputs* that MIAs exploit, by re-generating input samples before feeding them to the target model. Our defense, called Diffence, therefore works *pre-inference*, unlike prior defenses that operate either at training time (modifying the model) or post-inference (modifying the model's output).

A unique feature of Diffence is that it operates on input samples only, without modifying the training or inference phase of the target model. It can therefore be *cascaded with other defense mechanisms*, as we demonstrate through experiments. Diffence is specifically designed to preserve the model's prediction label for each sample, thereby not affecting accuracy. Furthermore, we empirically demonstrate that it does not reduce the usefulness of the confidence vectors. Through extensive experimentation, we show that Diffence can serve as a robust plug-and-play defense mechanism, enhancing membership privacy without compromising model utility, in terms of both accuracy and the usefulness of confidence vectors, across standard and defended settings. For instance, Diffence reduces MIA attack accuracy against an undefended model by 15.8% and attack AUC by 14.0% on average across three datasets, all without impacting model utility. By integrating Diffence with prior defenses, we achieve new state-of-the-art privacy-utility trade-offs. For example, when combined with the state-of-the-art SELENA defense, it reduces attack accuracy by 9.3% and attack AUC by 10.0%. Diffence achieves this with negligible computational overhead, adding only 57 ms to the inference time per sample on average.
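To make the pre-inference idea concrete, below is a minimal sketch (not the authors' released implementation) of a wrapper that re-generates each input with a pretrained diffusion model before querying the unmodified target model, and only accepts a re-generated prediction if its label agrees with the original one, so accuracy is preserved. The `regenerate` helper and the `add_noise`/`denoise` calls are assumed, illustrative APIs, not part of any specific library.

```python
# Illustrative sketch of a pre-inference "regenerate, then predict" defense.
# The diffusion sampler interface and the label-preserving selection shown
# here are assumptions for illustration only.
import torch
import torch.nn.functional as F


@torch.no_grad()
def regenerate(diffusion, x, noise_steps=100):
    """Hypothetical helper: partially noise the input, then denoise it with a
    pretrained diffusion model, yielding a sample close to x but with the
    fine-grained detail that MIAs exploit washed out."""
    x_noisy = diffusion.add_noise(x, t=noise_steps)        # assumed API
    return diffusion.denoise(x_noisy, from_t=noise_steps)  # assumed API


@torch.no_grad()
def defended_predict(model, diffusion, x, num_candidates=4):
    """Pre-inference wrapper: query the unmodified target model on re-generated
    versions of x and return a confidence vector whose predicted label matches
    the original prediction, so model accuracy is unaffected."""
    original_probs = F.softmax(model(x), dim=-1)
    original_label = original_probs.argmax(dim=-1)

    for _ in range(num_candidates):
        x_regen = regenerate(diffusion, x)
        probs = F.softmax(model(x_regen), dim=-1)
        if (probs.argmax(dim=-1) == original_label).all():
            return probs           # label preserved; confidences come from x_regen
    return original_probs          # fallback: never change the predicted label
```

Because the wrapper only touches the input and leaves training and the model's outputs untouched, it can sit in front of a model that is already protected by a training-time defense such as SELENA, which is how the cascading experiments described above are set up.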
