Ruixuan Liu (Emory University), Toan Tran (Emory University), Tianhao Wang (University of Virginia), Hongsheng Hu (Shanghai Jiao Tong University), Shuo Wang (Shanghai Jiao Tong University), Li Xiong (Emory University)

As large language models increasingly memorize web-scraped training content, they risk exposing copyrighted or private information. Existing protections require compliance from crawlers or model developers, fundamentally limiting their effectiveness. We propose ExpShield, a proactive self-guard that mitigates memorization while maintaining readability via invisible perturbations, and we formulate it as a constrained optimization problem. Because no individual-level risk metric exists for natural text, we first propose instance exploitation, a metric that measures how much training on a specific text increases the chance of guessing that text from a set of candidates, with zero indicating perfect defense. Directly solving the problem is infeasible for defenders without sufficient knowledge, so we develop two effective proxy solutions: single-level optimization and synthetic perturbation. To strengthen the defense, we reveal and verify the memorization trigger hypothesis, which helps identify the key tokens driving memorization. Leveraging this insight, we design targeted perturbations that (i) neutralize inherent trigger tokens to reduce memorization and (ii) introduce artificial trigger tokens to misdirect model memorization. Experiments validate our defense across attacks, model scales, and tasks in language and vision-to-language modeling. Even with a privacy backdoor, the Membership Inference Attack (MIA) AUC drops from 0.95 to 0.55 under the defense, and the instance exploitation approaches zero. This suggests that, compared to the ideal no-misuse scenario, the risk of exposing a text instance remains nearly unchanged despite its inclusion in the training data.
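The instance-exploitation idea can be illustrated with a small sketch. The abstract does not give the paper's exact formulation, so the scoring rule below (a softmax over negative candidate losses, with the exploitation gap measured against a reference model never trained on the target) is an assumption chosen only to show the shape of the metric; the function names and toy numbers are hypothetical.

```python
import math

def guess_probability(candidate_losses, target_idx):
    """Chance of guessing the target from a candidate set, using a
    softmax over negative losses as an assumed scoring rule."""
    weights = [math.exp(-loss) for loss in candidate_losses]
    return weights[target_idx] / sum(weights)

def instance_exploitation(losses_trained, losses_reference, target_idx):
    """Illustrative gap: how much training on the target instance raises
    the chance of guessing it. Zero would indicate a perfect defense."""
    p_trained = guess_probability(losses_trained, target_idx)
    p_reference = guess_probability(losses_reference, target_idx)
    return p_trained - p_reference

# Toy losses: the trained model memorized candidate 0 (much lower loss),
# while the reference model treats all candidates similarly.
trained = [0.5, 3.0, 3.2, 2.9]
reference = [2.8, 3.0, 3.2, 2.9]
gap = instance_exploitation(trained, reference, target_idx=0)
```

In this toy setting the memorized instance is far easier to guess under the trained model, so the gap is large and positive; an effective defense would drive it toward zero.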
