Ruixuan Liu (Emory University), Toan Tran (Emory University), Tianhao Wang (University of Virginia), Hongsheng Hu (Shanghai Jiao Tong University), Shuo Wang (Shanghai Jiao Tong University), Li Xiong (Emory University)

As large language models increasingly memorize web-scraped training content, they risk exposing copyrighted or private information. Existing protections require compliance from crawlers or model developers, which fundamentally limits their effectiveness. We propose ExpShield, a proactive self-guard that mitigates memorization while maintaining readability via invisible perturbations, and we formulate it as a constrained optimization problem. Because natural text lacks an individual-level risk metric, we first propose instance exploitation: a metric that measures how much training on a specific text increases the chance of guessing that text from a set of candidates, with zero indicating a perfect defense. Since defenders lack the knowledge needed to solve the problem directly, we develop two effective proxy solutions: single-level optimization and synthetic perturbation. To strengthen the defense, we reveal and verify the memorization-trigger hypothesis, which helps identify the key tokens driving memorization. Leveraging this insight, we design targeted perturbations that (i) neutralize inherent trigger tokens to reduce memorization and (ii) introduce artificial trigger tokens to misdirect the model's memorization. Experiments validate our defense across attacks, model scales, and tasks in language and vision-to-language modeling. Even with a privacy backdoor, the Membership Inference Attack (MIA) AUC drops from 0.95 to 0.55 under our defense, and instance exploitation approaches zero. In other words, compared to the ideal no-misuse scenario, the risk of exposing a text instance remains nearly unchanged despite its inclusion in the training data.
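The instance exploitation metric described above can be illustrated with a minimal sketch. This is not the authors' implementation; the candidate scores below are hypothetical per-text losses from a model (lower loss means the text is a more likely guess), and the conversion of scores to a guessing probability via a softmax over negative losses is an assumption made for illustration.

```python
import math

def guess_probability(scores, target_idx):
    """Chance of guessing the target among candidates: softmax of
    negative scores, so lower loss -> higher guessing probability."""
    weights = [math.exp(-s) for s in scores]
    return weights[target_idx] / sum(weights)

def instance_exploitation(scores_before, scores_after, target_idx):
    """Increase in the chance of guessing the target text caused by
    training on it; zero indicates a perfect defense."""
    return (guess_probability(scores_after, target_idx)
            - guess_probability(scores_before, target_idx))

# Toy example: training drops the target's loss from 2.0 to 0.5,
# making it easier to single out from the candidate set.
before = [2.0, 2.1, 1.9, 2.0]   # hypothetical losses before training
after  = [0.5, 2.1, 1.9, 2.0]   # target (index 0) was memorized
risk = instance_exploitation(before, after, target_idx=0)
```

Under a perfect defense the score distribution would be unchanged by training, so `instance_exploitation` would return zero for the same before/after scores.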
