Xue Tan (Fudan University), Hao Luan (Fudan University), Mingyu Luo (Fudan University), Zhuyang Yu (Fudan University), Jun Dai (Worcester Polytechnic Institute), Xiaoyan Sun (Worcester Polytechnic Institute), Ping Chen (Fudan University)

With the rapid development of Large Language Models (LLMs), their applications have expanded across various aspects of daily life. Open-source LLMs, in particular, have gained popularity due to their accessibility, resulting in widespread downloading and redistribution. The impressive capabilities of LLMs result from training on massive and often undisclosed datasets. This raises the question of whether sensitive content, such as copyrighted or personal data, was included in training, a problem known as membership inference. Existing methods rely mainly on model outputs and overlook the rich internal representations; forgoing this internal information leads to suboptimal results, revealing a research gap for membership inference in open-source, white-box LLMs.

In this paper, we address the challenge of detecting the training data of open-source LLMs. To support this investigation, we introduce three dynamic benchmarks: WikiTection, NewsTection, and ArXivTection. We then propose a white-box approach for training data detection that analyzes the neural activations of LLMs. Our key insight is that neuron activations across all layers of an LLM reflect the model's internal representation of knowledge related to the input data, and can therefore effectively distinguish the LLM's training data from non-training data. Extensive experiments on these benchmarks demonstrate the strong effectiveness of our approach. For instance, on the WikiTection benchmark, our method achieves an AUC of around 0.98 across five LLMs: GPT2-xl, LLaMA2-7B, LLaMA3-8B, Mistral-7B, and LLaMA2-13B. Additionally, we conduct an in-depth analysis of factors such as model size, input length, and text paraphrasing, further validating the robustness and adaptability of our method.
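The high-level recipe in the abstract (summarize per-layer neuron activations, then use them to separate member from non-member texts) can be sketched as follows. This is a minimal illustration, not the paper's actual method: the feature construction (`layerwise_features`, a simple per-layer mean) and the logistic-regression probe are assumptions made for clarity, and real usage would obtain `activations` from a model's hidden states (e.g., one `(seq_len, hidden_dim)` array per layer).

```python
import numpy as np

def layerwise_features(activations):
    """Collapse per-layer activation matrices into one feature vector.

    activations: list of (seq_len, hidden_dim) arrays, one per layer.
    Here we take the mean activation of each layer as a simple,
    hypothetical summary statistic.
    """
    return np.array([a.mean() for a in activations])

def train_membership_probe(feats, labels, lr=0.5, steps=2000):
    """Fit a tiny logistic-regression probe on layerwise features.

    feats: (n_samples, n_layers) feature matrix; labels: 1 = member
    (training data), 0 = non-member. Returns weights and bias.
    """
    X = np.asarray(feats, dtype=float)
    y = np.asarray(labels, dtype=float)
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # sigmoid prediction
        grad = p - y                            # gradient of log-loss w.r.t. logit
        w -= lr * (X.T @ grad) / len(y)
        b -= lr * grad.mean()
    return w, b

def membership_score(activations, w, b):
    """Probability-like score; higher means 'more likely training data'."""
    z = layerwise_features(activations) @ w + b
    return 1.0 / (1.0 + np.exp(-z))
```

A synthetic usage example: member texts are simulated with slightly elevated mean activations, and the probe learns to rank them above non-members, mirroring how an AUC would be computed over the resulting scores.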
