Zhifan Luo (State Key Laboratory of Blockchain and Data Security, Zhejiang University), Shuo Shao (State Key Laboratory of Blockchain and Data Security, Zhejiang University), Su Zhang (Huawei Technology), Lijing Zhou (Huawei Technology), Yuke Hu (State Key Laboratory of Blockchain and Data Security, Zhejiang University), Chenxu Zhao (State Key Laboratory of Blockchain and Data Security, Zhejiang University), Zhihao Liu (State Key Laboratory of Blockchain and Data Security, Zhejiang University), Zhan Qin (State Key Laboratory of Blockchain and Data Security, Zhejiang University and Hangzhou High-Tech Zone (Binjiang) Institute of Blockchain and Data Security)

The Key-Value (KV) cache, which stores intermediate attention computations (Key and Value pairs) to avoid redundant calculations, is a fundamental mechanism for accelerating Large Language Model (LLM) inference. However, this efficiency optimization introduces significant yet underexplored privacy risks. This paper provides the first comprehensive analysis of these vulnerabilities, demonstrating that an adversary can reconstruct sensitive user inputs directly from the KV-cache. We design and implement three distinct attack vectors: a direct Inversion Attack, a more broadly applicable and potent Collision Attack, and a semantic-based Injection Attack. Together, these attacks demonstrate that KV-cache privacy leakage is both practical and severe. To mitigate these risks, we propose KV-Cloak, a novel, lightweight, and efficient defense mechanism. KV-Cloak uses a reversible matrix-based obfuscation scheme, combined with operator fusion, to secure the KV-cache. Our extensive experiments show that KV-Cloak effectively thwarts all proposed attacks, reducing reconstruction quality to random noise. Crucially, it achieves this robust security with virtually no degradation in model accuracy and minimal performance overhead, offering a practical solution for trustworthy LLM deployment.
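To make the two core ideas concrete, the sketch below shows (1) why a KV-cache leaks: the cached Keys are a direct linear projection of the input tokens, and (2) how a reversible matrix transform can hide them while preserving attention scores. This is a minimal illustration under assumed names, not the paper's actual KV-Cloak construction, which additionally uses operator fusion.

```python
import numpy as np

rng = np.random.default_rng(0)
t, d = 4, 8  # t cached positions, head dimension d

# A plain KV-cache: Keys for past tokens, stored so each decoding
# step can attend over them without recomputing their projections.
# An adversary reading this buffer sees the raw Key vectors.
K = rng.normal(size=(t, d))
q = rng.normal(size=d)
scores = K @ q  # raw attention logits over the cached positions

# Reversible matrix obfuscation (illustrative, not KV-Cloak itself):
# write K @ A to the cache instead of K; a party holding the secret
# matrix A recovers identical logits by pre-transforming the query
# with inv(A), since (K A)(A^-1 q) = K q.
A = rng.normal(size=(d, d))        # random matrix, invertible a.s.
K_obf = K @ A                      # what lands in the cache
q_tilde = np.linalg.inv(A) @ q     # query transformed with the secret
scores_obf = K_obf @ q_tilde

assert np.allclose(scores, scores_obf)  # logits preserved end-to-end
```

Without knowledge of `A`, the cached `K_obf` is a rotated/sheared version of the true Keys, which is what frustrates inversion-style reconstruction; with `A`, inference proceeds unchanged because the attention logits are identical.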
