Amrita Roy Chowdhury (University of Michigan, Ann Arbor), David Glukhov (University of Toronto and Vector Institute), Divyam Anshumaan (University of Wisconsin-Madison), Prasad Chalasani (Langroid Incorporated), Nicholas Papernot (University of Toronto and Vector Institute), Somesh Jha (University of Wisconsin-Madison), Mihir Bellare (University of California, San Diego)

The rise of large language models (LLMs) has introduced new privacy challenges, particularly during inference, where sensitive information in prompts may be exposed to proprietary LLM APIs. In this paper, we address the problem of formally protecting the sensitive information contained in a prompt while maintaining response quality. To this end, first, we introduce a cryptographically inspired notion of a prompt sanitizer, which transforms an input prompt to protect its sensitive tokens. Second, we propose Prϵϵmpt, a novel system that implements a prompt sanitizer, focusing on the sensitive information that can be derived solely from the individual tokens. Prϵϵmpt categorizes sensitive tokens into two types: (1) those where the LLM’s response depends solely on the format (such as SSNs, credit card numbers), for which we use format-preserving encryption (FPE); and (2) those where the response depends on specific values (such as age, salary), for which we apply metric differential privacy (mDP). Our evaluation demonstrates that Prϵϵmpt is a practical method to achieve meaningful privacy guarantees while maintaining high utility compared to unsanitized prompts, and that it outperforms prior methods.
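The two token categories described above can be illustrated with a minimal sketch. This is not the paper's implementation: the digit-shifting function below is a toy stand-in for real FPE (production systems would use a standardized cipher such as FF1), and the metric-DP mechanism simply adds Laplace noise calibrated to 1/ε, which satisfies ε·d-privacy under the absolute-value metric on the reals. Both function names are hypothetical.

```python
import random


def toy_fpe_digits(token: str, key: int) -> str:
    """Toy format-preserving map for digit-based tokens (e.g., SSNs).

    Each digit is shifted by a keyed pseudorandom offset; separators such
    as hyphens are kept so the token's format survives. NOT secure FPE --
    illustration only. Inversion requires replaying the same offsets.
    """
    rng = random.Random(key)
    out = []
    for ch in token:
        if ch.isdigit():
            out.append(str((int(ch) + rng.randrange(10)) % 10))
        else:
            out.append(ch)  # preserve the token's format (e.g., "ddd-dd-dddd")
    return "".join(out)


def sanitize_numeric(value: float, epsilon: float) -> float:
    """Metric-DP sanitization of a numeric token (e.g., age, salary).

    Adds Laplace(0, 1/epsilon) noise, sampled here as the difference of
    two independent Exponential(epsilon) draws. Under the metric
    d(x, y) = |x - y|, this mechanism satisfies epsilon * d-privacy.
    """
    return value + random.expovariate(epsilon) - random.expovariate(epsilon)
```

A sanitizer would route each detected sensitive token to one of these two paths: the FPE path is deterministic and invertible with the key (so the true value can be restored in the LLM's response), while the mDP path deliberately perturbs the value, trading accuracy for a formal privacy guarantee controlled by ε.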
