Amrita Roy Chowdhury (University of Michigan, Ann Arbor), David Glukhov (University of Toronto), Divyam Anshumaan (University of Wisconsin), Prasad Chalasani (Langroid), Nicholas Papernot (University of Toronto), Somesh Jha (University of Wisconsin), Mihir Bellare (UCSD)

The rise of large language models (LLMs) has introduced new privacy challenges, particularly during *inference*, where sensitive information in prompts may be exposed to proprietary LLM APIs. In this paper, we address the problem of formally protecting the sensitive information contained in a prompt while maintaining response quality. To this end, we first introduce a cryptographically inspired notion of a *prompt sanitizer*, which transforms an input prompt to protect its sensitive tokens. Second, we propose Prεεmpt, a novel system that implements a prompt sanitizer, focusing on the sensitive information that can be derived solely from the individual tokens. Prεεmpt categorizes sensitive tokens into two types: (1) those where the LLM's response depends solely on the format (such as SSNs and credit card numbers), for which we use format-preserving encryption (FPE); and (2) those where the response depends on specific values (such as age or salary), for which we apply metric differential privacy (mDP). Our evaluation demonstrates that Prεεmpt is a practical method for achieving meaningful privacy guarantees while maintaining high utility compared to unsanitized prompts, and that it outperforms prior methods.
