Guanlong Wu (Southern University of Science and Technology), Zheng Zhang (ByteDance Inc.), Yao Zhang (ByteDance Inc.), Weili Wang (Southern University of Science and Technology), Jianyu Niu (Southern University of Science and Technology), Ye Wu (ByteDance Inc.), Yinqian Zhang (Southern University of Science and Technology (SUSTech))

Large Language Models (LLMs), which lay the groundwork for Artificial General Intelligence (AGI), have recently gained significant traction in academia and industry due to their disruptive applications. To enable scalable applications and efficient resource management, various multi-tenant LLM serving frameworks have been proposed, in which an LLM serves multiple users simultaneously. One notable mechanism in recent works, such as SGLang and vLLM, is sharing the Key-Value (KV) cache for identical token sequences among multiple users, saving both memory and computation. This paper presents the first investigation of the security risks associated with multi-tenant LLM serving. We show that state-of-the-art KV cache sharing mechanisms may introduce new side-channel attack vectors, allowing unauthorized reconstruction of user prompts and compromising sensitive user information among mutually distrustful users. Specifically, we introduce our attack, PROMPTPEEK, and apply it to three scenarios in which an adversary, with varying degrees of prior knowledge, reverse-engineers prompts from other users. This study underscores the need for careful resource management in multi-tenant LLM serving and provides critical insights for future security enhancements.
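To make the side-channel intuition concrete, the sketch below illustrates one way a cross-tenant KV-cache timing probe could work in principle: if a candidate prefix matches another user's cached prompt prefix, the server can skip prefill computation for the shared tokens, so time-to-first-token drops. This is a hypothetical illustration, not the paper's PROMPTPEEK implementation; the endpoint URL, model name, threshold, and helper functions are all assumptions chosen for readability.

```python
# Hypothetical timing probe against a shared, OpenAI-compatible serving
# endpoint (ENDPOINT, MODEL, and the 0.8 threshold are placeholders).
# Idea: a candidate prefix already resident in the shared KV cache skips
# prefill recomputation, so its time-to-first-token (TTFT) is lower.
import statistics
import time

import requests

ENDPOINT = "http://localhost:8000/v1/completions"  # placeholder endpoint
MODEL = "example-model"                            # placeholder model name


def time_to_first_token(prompt: str, trials: int = 5) -> float:
    """Median wall-clock latency of a 1-token completion for `prompt`."""
    samples = []
    for _ in range(trials):
        start = time.perf_counter()
        requests.post(
            ENDPOINT,
            json={"model": MODEL, "prompt": prompt, "max_tokens": 1},
            timeout=30,
        )
        samples.append(time.perf_counter() - start)
    return statistics.median(samples)


def probe_prefix(candidate_prefix: str, baseline_prefix: str) -> bool:
    """Return True if `candidate_prefix` appears to hit the shared KV cache,
    i.e., it prefills noticeably faster than an uncached baseline prompt."""
    cached = time_to_first_token(candidate_prefix)
    uncached = time_to_first_token(baseline_prefix)
    return cached < 0.8 * uncached  # crude threshold; real attacks need calibration


if __name__ == "__main__":
    # An adversary could iterate over candidate tokens, extending the guessed
    # prefix token by token whenever the probe signals a cache hit.
    hit = probe_prefix("The patient's diagnosis is", "zqxjk unrelated filler text")
    print("likely shared-prefix cache hit" if hit else "no evidence of cache hit")
```

A measurable TTFT gap like this is only a signal of prefix reuse; turning it into full prompt reconstruction, as the paper does, requires careful candidate generation and handling of noise and batching effects.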
