Jing Shang (Beijing Jiaotong University), Jian Wang (Beijing Jiaotong University), Kailun Wang (Beijing Jiaotong University), Jiqiang Liu (Beijing Jiaotong University), Nan Jiang (Beijing University of Technology), Md Armanuzzaman (Northeastern University), Ziming Zhao (Northeastern University)

Model pruning is a technique for compressing deep learning models, and pruning a model iteratively achieves better compression with lower utility loss. However, our analysis reveals that iterative pruning significantly increases model memorization, making the pruned models more vulnerable to membership inference attacks (MIAs). Unfortunately, the vast majority of existing defenses against MIAs are designed for original, unpruned models. In this paper, we propose WeMem, a new framework that weakens memorization during the iterative pruning process. Specifically, our analysis identifies two key factors that increase memorization in iterative pruning: data reuse and inherent memorability. We consider the individual and combined impacts of both factors, yielding three scenarios in which iteratively pruned models exhibit increased memorization. We design three defense primitives based on these factors' characteristics and, by combining them, propose methods tailored to each scenario that weaken memorization effectively. Comprehensive experiments under ten adaptive MIAs demonstrate the effectiveness of the proposed defenses. Moreover, our defenses outperform five existing defenses in terms of privacy-utility tradeoff and efficiency. Additionally, we enhance the proposed defenses to automatically adjust their settings for optimal defense, improving their practicality.
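
To make the setting concrete, the sketch below illustrates iterative magnitude pruning with repeated fine-tuning on the same training data, i.e., the data-reuse pattern the abstract identifies as a driver of memorization. It is a minimal illustration assuming PyTorch's torch.nn.utils.prune utilities, a toy multilayer perceptron, and synthetic data; it is not the paper's WeMem framework or its exact pruning setup.

# Minimal sketch (not the paper's WeMem method): iterative magnitude pruning
# in PyTorch, fine-tuning on the same training data after every pruning round.
# The model, data, and sparsity schedule below are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune
from torch.utils.data import DataLoader, TensorDataset

# Toy classifier and synthetic training data (assumptions for illustration).
model = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10))
loader = DataLoader(TensorDataset(torch.randn(512, 784),
                                  torch.randint(0, 10, (512,))),
                    batch_size=64, shuffle=True)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

def fine_tune(epochs):
    # Fine-tuning reuses the same training set each round (the data-reuse factor).
    for _ in range(epochs):
        for x, y in loader:
            optimizer.zero_grad()
            loss_fn(model(x), y).backward()
            optimizer.step()

def iterative_prune(rounds=3, amount=0.2, epochs_per_round=2):
    # Each round prunes 20% of the remaining weights by L1 magnitude, then fine-tunes.
    for _ in range(rounds):
        for module in model.modules():
            if isinstance(module, nn.Linear):
                prune.l1_unstructured(module, name="weight", amount=amount)
        fine_tune(epochs_per_round)
    # Fold the accumulated pruning masks back into the weight tensors.
    for module in model.modules():
        if isinstance(module, nn.Linear):
            prune.remove(module, "weight")

iterative_prune()

Because every pruning round re-exposes the surviving weights to the same training examples, member samples tend to accumulate lower loss than non-members, which is the kind of signal membership inference attacks exploit.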
