Zhexi Lu (Rensselaer Polytechnic Institute), Hongliang Chi (Rensselaer Polytechnic Institute), Nathalie Baracaldo (IBM Research - Almaden), Swanand Ravindra Kadhe (IBM Research - Almaden), Yuseok Jeon (Korea University), Lei Yu (Rensselaer Polytechnic Institute)

Membership inference attacks (MIAs) pose a critical privacy threat to fine-tuned large language models (LLMs), especially when models are adapted to domain-specific tasks using sensitive data. While prior black-box MIA techniques rely on confidence scores or token likelihoods, these signals are often entangled with a sample’s intrinsic properties, such as content difficulty or rarity, leading to poor generalization and low signal-to-noise ratios. In this paper, we propose ICP-MIA, a novel MIA framework grounded in the theory of training dynamics, particularly the phenomenon of diminishing returns during optimization. We introduce the Optimization Gap as a fundamental signal of membership: at convergence, member samples exhibit minimal remaining loss-reduction potential, while non-members retain significant potential for further optimization. To estimate this gap in a black-box setting, we propose In-Context Probing (ICP), a training-free method that simulates fine-tuning-like behavior via strategically constructed input contexts. We develop two probing strategies: reference-data-based (using semantically similar public samples) and self-perturbation (via masking or generation). Experiments on three tasks and multiple LLMs show that ICP-MIA significantly outperforms prior black-box MIAs, particularly at low false positive rates. We further analyze how reference data alignment, model type, PEFT configurations, and training schedules affect attack effectiveness. Our findings establish ICP-MIA as a practical and theoretically grounded framework for auditing privacy risks in deployed LLMs.
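
To make the Optimization Gap idea concrete, the following is a minimal sketch of how an in-context probing score could be computed, assuming black-box access to per-token likelihoods of a HuggingFace causal LM. This is an illustration of the abstract's core signal, not the authors' implementation: `gpt2` is a stand-in for the fine-tuned target model, and the function names (`target_loss`, `optimization_gap_score`) and the probing-context construction are hypothetical.

```python
# Hypothetical sketch of the In-Context Probing (ICP) idea described above:
# score membership by how much a target sample's loss drops when the model is
# conditioned on a probing context (e.g., a semantically similar reference
# sample). All names here are illustrative, not the paper's implementation.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # stand-in for the fine-tuned target LLM
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

@torch.no_grad()
def target_loss(text: str, context: str = "") -> float:
    """Mean negative log-likelihood of `text`, optionally conditioned on a
    prepended probing `context`; only the target span is scored."""
    tgt_ids = tokenizer(text, return_tensors="pt").input_ids
    if context:
        ctx_ids = tokenizer(context, return_tensors="pt").input_ids
        input_ids = torch.cat([ctx_ids, tgt_ids], dim=1)
        n_ctx = ctx_ids.shape[1]
    else:
        input_ids = tgt_ids
        n_ctx = 0
    logits = model(input_ids).logits
    # Shift so position i predicts token i+1, then keep only target tokens.
    shift_logits = logits[:, :-1, :].squeeze(0)
    shift_labels = input_ids[:, 1:].squeeze(0)
    nll = torch.nn.functional.cross_entropy(
        shift_logits, shift_labels, reduction="none"
    )
    return nll[max(n_ctx - 1, 0):].mean().item()

def optimization_gap_score(sample: str, probe_context: str) -> float:
    """Estimated remaining loss-reduction potential: small for members
    (little room left to optimize), large for non-members."""
    return target_loss(sample) - target_loss(sample, context=probe_context)
```

Under this reading, a decision rule would threshold the score: samples whose loss barely improves under probing (small gap) are flagged as likely training members, while samples that still benefit substantially from the context (large gap) are flagged as non-members.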
