Zhexi Lu (Rensselaer Polytechnic Institute), Hongliang Chi (Rensselaer Polytechnic Institute), Nathalie Baracaldo (IBM Research - Almaden), Swanand Ravindra Kadhe (IBM Research - Almaden), Yuseok Jeon (Korea University), Lei Yu (Rensselaer Polytechnic Institute)

Membership inference attacks (MIAs) pose a critical privacy threat to fine-tuned large language models (LLMs), especially when models are adapted to domain-specific tasks using sensitive data. While prior black-box MIA techniques rely on confidence scores or token likelihoods, these signals are often entangled with a sample’s intrinsic properties—such as content difficulty or rarity—leading to poor generalization and low signal-to-noise ratios. In this paper, we propose ICP-MIA, a novel MIA framework grounded in the theory of training dynamics, particularly the phenomenon of diminishing returns during optimization. We introduce the Optimization Gap as a fundamental signal of membership: at convergence, member samples exhibit minimal remaining loss-reduction potential, while non-members retain significant potential for further optimization. To estimate this gap in a black-box setting, we propose In-Context Probing (ICP)—a training-free method that simulates fine-tuning-like behavior via strategically constructed input contexts. We propose two probing strategies: reference-data-based (using semantically similar public samples) and self-perturbation (via masking or generation). Experiments on three tasks and multiple LLMs show that ICP-MIA significantly outperforms prior black-box MIAs, particularly at low false positive rates. We further analyze how reference data alignment, model type, PEFT configurations, and training schedules affect attack effectiveness. Our findings establish ICP-MIA as a practical and theoretically grounded framework for auditing privacy risks in deployed LLMs.
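Below is a minimal illustrative sketch (not the authors' implementation) of the core idea: approximate the "optimization gap" by measuring how much a sample's loss drops when a strategically chosen probing context is prepended, using only token log-likelihoods. The model name, probe text, and helper functions are placeholders; a reference-data-based probe is shown, and a self-perturbation probe would simply swap in a masked or regenerated variant of the sample itself.

```python
# Sketch of an in-context-probing membership score (assumptions: a causal LM
# exposing token log-likelihoods; "gpt2" stands in for the fine-tuned target model).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer


def avg_token_nll(model, tokenizer, text, context=""):
    """Average negative log-likelihood of `text`, optionally conditioned on `context`."""
    ctx_ids = tokenizer(context, return_tensors="pt").input_ids if context else None
    tgt_ids = tokenizer(text, return_tensors="pt").input_ids
    if ctx_ids is not None:
        input_ids = torch.cat([ctx_ids, tgt_ids], dim=1)
        target_start = ctx_ids.shape[1]
    else:
        input_ids = tgt_ids
        target_start = 0
    with torch.no_grad():
        logits = model(input_ids).logits
    # Shift: logits at position t predict token t+1.
    log_probs = torch.log_softmax(logits[:, :-1, :], dim=-1)
    labels = input_ids[:, 1:]
    token_ll = log_probs.gather(-1, labels.unsqueeze(-1)).squeeze(-1)
    # Score only the target span, skipping the probing context.
    tgt_ll = token_ll[:, max(target_start - 1, 0):]
    return -tgt_ll.mean().item()


def icp_score(model, tokenizer, sample, probe_context):
    """Optimization-gap proxy: loss reduction when the probe is prepended.
    A large reduction suggests remaining optimization potential (non-member-like);
    a near-zero reduction suggests the sample was already fit during fine-tuning."""
    base = avg_token_nll(model, tokenizer, sample)
    probed = avg_token_nll(model, tokenizer, sample, context=probe_context)
    return base - probed


if __name__ == "__main__":
    name = "gpt2"  # placeholder for the fine-tuned target model
    tok = AutoTokenizer.from_pretrained(name)
    lm = AutoModelForCausalLM.from_pretrained(name).eval()
    sample = "The patient was prescribed 20 mg of atorvastatin daily."
    # Reference-data-based probe: a semantically similar public sample.
    probe = "The patient was given 10 mg of rosuvastatin each evening."
    print("optimization-gap proxy:", icp_score(lm, tok, sample, probe))
```

In practice, the score would be thresholded (or calibrated against a population of known non-members) to decide membership; the sketch only illustrates the signal itself.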
