Zhifan Luo (Zhejiang University), Shuo Shao (Zhejiang University), Su Zhang (Huawei Technology), Lijing Zhou (Huawei Technology), Yuke Hu (Zhejiang University), Chenxu Zhao (Zhejiang University), Zhihao Liu (Zhejiang University), Zhan Qin (Zhejiang University)

The Key-Value (KV) cache, which stores intermediate attention computations (Key and Value pairs) to avoid redundant calculations, is a fundamental mechanism for accelerating Large Language Model (LLM) inference. However, this efficiency optimization introduces significant yet underexplored privacy risks. This paper provides the first comprehensive analysis of these vulnerabilities, demonstrating that an attacker can reconstruct sensitive user inputs directly from the KV-cache. We design and implement three distinct attack vectors: a direct Inversion Attack, a more broadly applicable and potent Collision Attack, and a semantic-based Injection Attack. These attacks demonstrate the practicality and severity of KV-cache privacy leakage. To mitigate these threats, we propose KV-Cloak, a novel, lightweight, and efficient defense mechanism. KV-Cloak secures the KV-cache with a reversible matrix-based obfuscation scheme combined with operator fusion. Our extensive experiments show that KV-Cloak effectively thwarts all proposed attacks, reducing reconstruction quality to random noise. Crucially, it achieves this robust security with virtually no degradation in model accuracy and minimal performance overhead, offering a practical solution for trustworthy LLM deployment.
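Since the abstract only summarizes the mechanisms, a toy sketch may help fix intuition. The NumPy code below illustrates (a) how a KV-cache appends each token's Key/Value once and reuses them across decoding steps, and (b) the general idea of reversible matrix-based obfuscation: cached Keys and Values are stored multiplied by secret invertible matrices, and the inverses are applied only when the cache is consumed (in KV-Cloak that de-obfuscation is fused into the attention operator). The matrices `A`/`B` and every function name here are illustrative assumptions, not the paper's actual construction.

```python
import numpy as np

d = 4  # head dimension (toy size)
rng = np.random.default_rng(0)

# Secret invertible obfuscation matrices: a hypothetical stand-in for
# KV-Cloak's scheme, chosen well-conditioned so the inverse is stable.
A = rng.standard_normal((d, d)) + d * np.eye(d)
B = rng.standard_normal((d, d)) + d * np.eye(d)
A_inv, B_inv = np.linalg.inv(A), np.linalg.inv(B)

def decode_step(q, k_new, v_new, cache, obf=None):
    """Append this token's K/V to the cache, then attend over the full prefix.

    cache: ([cached keys], [cached values]); obf: (A, B, A_inv, B_inv) or None.
    """
    if obf is not None:
        # Stored form is obfuscated: an attacker reading the cache sees
        # k_new @ A, not k_new.
        k_new, v_new = k_new @ obf[0], v_new @ obf[1]
    cache[0].append(k_new)
    cache[1].append(v_new)
    K, V = np.stack(cache[0]), np.stack(cache[1])
    if obf is not None:
        # De-obfuscate at use time (fused into attention in KV-Cloak).
        K, V = K @ obf[2], V @ obf[3]
    scores = K @ q / np.sqrt(d)          # (t,) attention logits
    w = np.exp(scores - scores.max())
    w /= w.sum()                         # softmax over cached positions
    return w @ V                         # attention output for this step

# The obfuscated cache yields the same attention output as a plain cache,
# while the stored Keys/Values no longer expose the originals.
plain, secured = ([], []), ([], [])
for _ in range(3):
    q, kn, vn = (rng.standard_normal(d) for _ in range(3))
    out_plain = decode_step(q, kn, vn, plain)
    out_obf = decode_step(q, kn, vn, secured, obf=(A, B, A_inv, B_inv))
```

Because matrix inversion is exact up to floating-point error, `out_obf` matches `out_plain`, which mirrors the abstract's claim that the defense costs essentially no model accuracy.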
