Amrita Roy Chowdhury (University of Michigan, Ann Arbor), David Glukhov (University of Toronto and Vector Institute), Divyam Anshumaan (University of Wisconsin-Madison), Prasad Chalasani (Langroid Incorporated), Nicholas Papernot (University of Toronto and Vector Institute), Somesh Jha (University of Wisconsin-Madison), Mihir Bellare (University of California, San Diego)

The rise of large language models (LLMs) has introduced new privacy challenges, particularly during inference, where sensitive information in prompts may be exposed to proprietary LLM APIs. In this paper, we address the problem of formally protecting the sensitive information contained in a prompt while maintaining response quality. To this end, we first introduce a cryptographically inspired notion of a prompt sanitizer, which transforms an input prompt to protect its sensitive tokens. Second, we propose Prϵϵmpt, a novel system that implements a prompt sanitizer, focusing on the sensitive information that can be derived solely from the individual tokens. Prϵϵmpt categorizes sensitive tokens into two types: (1) those where the LLM’s response depends solely on the format (such as SSNs and credit card numbers), for which we use format-preserving encryption (FPE); and (2) those where the response depends on specific values (such as age and salary), for which we apply metric differential privacy (mDP). Our evaluation demonstrates that Prϵϵmpt is a practical method for achieving meaningful privacy guarantees while maintaining high utility relative to unsanitized prompts, and that it outperforms prior methods.
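To make the two sanitization modes concrete, here is a minimal, self-contained Python sketch: a toy Feistel-based format-preserving cipher for digit strings (a stand-in for a production FPE scheme such as NIST FF1/FF3; the paper does not specify this construction) and a Laplace mechanism for metric DP over numeric values. All names here (`fpe_encrypt_digits`, `mdp_sanitize`, the demo key, the choice of ϵ) are illustrative assumptions, not Prϵϵmpt's actual implementation.

```python
import hashlib
import hmac
import random

def _round_fn(key: bytes, rnd: int, half: str, out_len: int) -> int:
    # Pseudorandom round function: HMAC-SHA256 over (round number, one half),
    # reduced modulo 10**out_len so it can be mixed into the other half.
    mac = hmac.new(key, f"{rnd}:{half}".encode(), hashlib.sha256).digest()
    return int.from_bytes(mac, "big") % (10 ** out_len)

def fpe_encrypt_digits(key: bytes, digits: str, rounds: int = 10) -> str:
    # Toy Feistel-network FPE over decimal strings: the ciphertext has the same
    # length and alphabet (digits) as the plaintext, so a 9-digit SSN stays a
    # 9-digit string. Assumes len(digits) >= 2 and an even round count.
    assert rounds % 2 == 0 and len(digits) >= 2
    n = len(digits)
    left, right = digits[: n // 2], digits[n // 2 :]
    for rnd in range(rounds):
        f = _round_fn(key, rnd, right, len(left))
        mixed = (int(left) + f) % (10 ** len(left))
        left, right = right, f"{mixed:0{len(left)}d}"
    return left + right

def fpe_decrypt_digits(key: bytes, digits: str, rounds: int = 10) -> str:
    # Inverse Feistel walk: undo the rounds in reverse order.
    assert rounds % 2 == 0 and len(digits) >= 2
    n = len(digits)
    left, right = digits[: n // 2], digits[n // 2 :]
    for rnd in reversed(range(rounds)):
        f = _round_fn(key, rnd, left, len(right))
        unmixed = (int(right) - f) % (10 ** len(right))
        left, right = f"{unmixed:0{len(right)}d}", left
    return left + right

def mdp_sanitize(value: float, eps: float) -> float:
    # Laplace mechanism for metric DP on the real line: with d(x, y) = |x - y|,
    # noise of scale 1/eps makes any two inputs x, y eps*d(x, y)-indistinguishable.
    # The difference of two Exp(rate=eps) draws is Laplace(scale = 1/eps).
    return value + random.expovariate(eps) - random.expovariate(eps)

if __name__ == "__main__":
    key = b"demo-only-key"                     # hypothetical key, for illustration only
    ct = fpe_encrypt_digits(key, "123456789")  # still a nine-digit string
    assert fpe_decrypt_digits(key, ct) == "123456789"
    print("FPE ciphertext:", ct)
    print("mDP-noised salary:", mdp_sanitize(52000.0, eps=0.001))
```

The sketch mirrors the paper's distinction between the two token types: the FPE output is exactly invertible by the key holder, so a format-dependent token can be restored after the LLM responds, while the mDP output is a perturbed value whose distortion (and privacy) is governed by ϵ.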
