Jing Shang (Beijing Jiaotong University), Jian Wang (Beijing Jiaotong University), Kailun Wang (Beijing Jiaotong University), Jiqiang Liu (Beijing Jiaotong University), Nan Jiang (Beijing University of Technology), Md Armanuzzaman (Northeastern University), Ziming Zhao (Northeastern University)

Model pruning is a technique for compressing deep learning models, and pruning a model iteratively achieves better compression with lower utility loss. However, our analysis reveals that iterative pruning significantly increases model memorization, making pruned models more vulnerable to membership inference attacks (MIAs). Unfortunately, the vast majority of existing defenses against MIAs are designed for original, unpruned models. In this paper, we propose WeMem, a new framework for weakening memorization during iterative pruning. Specifically, our analysis identifies two factors that increase memorization in iterative pruning: data reuse and inherent memorability. We consider the individual and combined impacts of both factors, yielding three scenarios in which iteratively pruned models exhibit increased memorization. Based on the characteristics of these factors, we design three defense primitives, and by combining these primitives we propose methods tailored to each scenario that weaken memorization effectively. Comprehensive experiments against ten adaptive MIAs demonstrate the effectiveness of the proposed defenses. Moreover, our defenses outperform five existing defenses in terms of privacy-utility tradeoff and efficiency. Additionally, we enhance the proposed defenses to automatically adjust their settings for optimal defense, improving their practicality.
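For context, iterative pruning typically alternates between removing low-magnitude weights and fine-tuning on the training set; this repeated exposure to the same examples is the "data reuse" the abstract refers to. Below is a minimal sketch of iterative magnitude pruning using PyTorch's `torch.nn.utils.prune` utilities. The model, pruning schedule, and fine-tuning loop are illustrative assumptions for exposition, not the paper's experimental setup or the WeMem defense itself.

```python
# Minimal sketch of iterative magnitude pruning (illustrative only;
# the model, schedule, and data are assumptions, not the paper's setup).
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

model = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10))
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

def fine_tune(model, loader, epochs=1):
    # Fine-tuning after each pruning round reuses the training data,
    # one of the factors the paper identifies as increasing memorization.
    for _ in range(epochs):
        for x, y in loader:
            optimizer.zero_grad()
            loss_fn(model(x), y).backward()
            optimizer.step()

linear_layers = [m for m in model.modules() if isinstance(m, nn.Linear)]
for round_idx in range(5):
    # Each round prunes 20% of the remaining weights by L1 magnitude.
    for layer in linear_layers:
        prune.l1_unstructured(layer, name="weight", amount=0.2)
    # fine_tune(model, train_loader)  # train_loader: assumed to exist

# Make the pruning permanent by removing the reparameterization.
for layer in linear_layers:
    prune.remove(layer, "weight")
```

Each call to `prune.l1_unstructured` masks out the smallest remaining weights, so after five rounds roughly 1 - 0.8^5 ≈ 67% of the weights are removed; the interleaved fine-tuning step is what distinguishes iterative pruning from one-shot pruning.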
