Xue Tan (Institute of Big Data, Fudan University, Shanghai, China and College of Computer Science and Artificial Intelligence, Fudan University, Shanghai, China), Hao Luan (Institute of Big Data, Fudan University, Shanghai, China and College of Computer Science and Artificial Intelligence, Fudan University, Shanghai, China), Mingyu Luo (Institute of Big Data, Fudan University, Shanghai, China and College of Computer Science and Artificial Intelligence, Fudan University, Shanghai, China), Zhuyang Yu (Institute of Big Data, Fudan University, Shanghai, China and College of Computer Science and Artificial Intelligence, Fudan University, Shanghai, China), Jun Dai (Department of Computer Science, Worcester Polytechnic Institute, MA, USA), Xiaoyan Sun (Department of Computer Science, Worcester Polytechnic Institute, MA, USA), Ping Chen (Institute of Big Data, Fudan University, Shanghai, China and Purple Mountain Laboratories, Nanjing, China)

With the rapid development of Large Language Models (LLMs), their applications have expanded across many aspects of daily life. Open-source LLMs in particular have gained popularity due to their accessibility, leading to widespread downloading and redistribution. The impressive capabilities of LLMs result from training on massive and often undisclosed datasets. This raises the question of whether sensitive content, such as copyrighted or personal data, is included in the training set, which is known as the membership inference problem. Existing methods rely mainly on model outputs and overlook the model's rich internal representations; this limited use of internal information leads to suboptimal results, revealing a research gap for membership inference in open-source, white-box LLMs.

In this paper, we address the challenge of detecting the training data of open-source LLMs. To support this investigation, we introduce three dynamic benchmarks: WikiTection, NewsTection, and ArXivTection. We then propose a white-box approach to training data detection that analyzes the neuron activations of LLMs. Our key insight is that the neuron activations across all layers of an LLM reflect its internal representation of knowledge related to the input, which can effectively distinguish the model's training data from non-training data. Extensive experiments on these benchmarks demonstrate the strong effectiveness of our approach. For instance, on the WikiTection benchmark, our method achieves an AUC of around 0.98 across five LLMs: GPT2-xl, LLaMA2-7B, LLaMA3-8B, Mistral-7B, and LLaMA2-13B. Additionally, we conduct an in-depth analysis of factors such as model size, input length, and text paraphrasing, further validating the robustness and adaptability of our method.
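The abstract does not give the concrete feature pipeline, but the key insight (layer-wise neuron activations separating members from non-members) can be illustrated with a minimal, hypothetical sketch. Here, `layerwise_features` and the toy activation tensors are illustrative assumptions, not the authors' actual method: each layer's activations are reduced to a single summary statistic, yielding one feature vector per input text over which a binary member/non-member classifier could be trained.

```python
import numpy as np

def layerwise_features(hidden_states):
    """Reduce per-layer activations (each layer: tokens x hidden_dim)
    to one summary statistic per layer, here the mean absolute activation."""
    return np.array([np.abs(h).mean() for h in hidden_states])

# With Hugging Face transformers, real hidden states could be obtained via
#   out = model(**inputs, output_hidden_states=True)
#   feats = layerwise_features([h.detach().numpy() for h in out.hidden_states])
# and the resulting feature vectors fed to any binary classifier.

# Toy stand-in for a 12-layer model, 8 tokens, hidden dim 16: assume
# (purely for illustration) that member texts elicit slightly stronger
# average activations than non-member texts.
rng = np.random.default_rng(0)
member = [rng.normal(1.1, 0.1, size=(8, 16)) for _ in range(12)]
nonmember = [rng.normal(1.0, 0.1, size=(8, 16)) for _ in range(12)]

f_m = layerwise_features(member)      # one feature per layer
f_n = layerwise_features(nonmember)
print(f_m.shape)
```

A threshold or logistic-regression classifier over such per-layer feature vectors is one plausible way to exploit the activation gap the paper describes; the actual method may use richer activation statistics.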
