Yuhao Wu (Washington University in St. Louis), Franziska Roesner (University of Washington), Tadayoshi Kohno (University of Washington), Ning Zhang (Washington University in St. Louis), Umar Iqbal (Washington University in St. Louis)

Large language models (LLMs) extended as systems, such as ChatGPT, have begun supporting third-party applications. These LLM apps leverage the de facto natural language-based automated execution paradigm of LLMs: that is, apps and their interactions are defined in natural language, provided access to user data, and allowed to freely interact with each other and the system. These LLM app ecosystems resemble the settings of earlier computing platforms, where there was insufficient isolation between apps and the system. Because third-party apps may not be trustworthy, and because these risks are exacerbated by the imprecision of natural language interfaces, current designs pose security and privacy risks for users. In this paper, we evaluate whether these issues can be addressed through execution isolation and what that isolation might look like in the context of LLM-based systems, where there are arbitrary natural language-based interactions between system components, between the LLM and apps, and between apps. To that end, we propose IsolateGPT, a design architecture that demonstrates the feasibility of execution isolation in LLM-based systems and provides a blueprint for implementing it. We evaluate IsolateGPT against a number of attacks and demonstrate that it protects against many security, privacy, and safety issues that exist in non-isolated LLM-based systems, without any loss of functionality. The performance overhead incurred by IsolateGPT to improve security is under 30% for three-quarters of tested queries.
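To make the execution-isolation idea concrete, here is a minimal sketch of a hub-and-spoke isolation model in the spirit the abstract describes: each app keeps private state, and all cross-app interaction is mediated by a trusted hub that requires explicit user approval. The names (`Hub`, `IsolatedApp`, `route`) and the approval flow are illustrative assumptions for this sketch, not IsolateGPT's actual implementation or API.

```python
# Illustrative sketch only: execution isolation for LLM apps via a trusted
# hub. All class and method names here are hypothetical, not from IsolateGPT.

class IsolatedApp:
    """Each app runs with its own private memory; no app can read another
    app's state directly."""
    def __init__(self, name, handler):
        self.name = name
        self._memory = {}        # private to this app
        self._handler = handler  # the app's own query-handling logic

    def handle(self, query):
        return self._handler(query, self._memory)


class Hub:
    """A trusted hub mediates every cross-app interaction instead of
    letting apps invoke each other freely."""
    def __init__(self):
        self._apps = {}

    def register(self, app):
        self._apps[app.name] = app

    def route(self, src, dst, query, user_approves):
        # Cross-app data flow happens only with explicit user approval.
        if not user_approves(src, dst, query):
            raise PermissionError(f"user denied {src} -> {dst}")
        return self._apps[dst].handle(query)


# Example: a travel app asks an email app for a booking confirmation,
# but only after the user approves the cross-app flow.
hub = Hub()
hub.register(IsolatedApp("email", lambda q, mem: f"email result for: {q}"))
hub.register(IsolatedApp("travel", lambda q, mem: f"travel result for: {q}"))

approve = lambda src, dst, q: True  # stand-in for a real user prompt
print(hub.route("travel", "email", "find my flight confirmation", approve))
```

The design choice this sketch illustrates is the core contrast drawn in the abstract: rather than apps interacting arbitrarily through natural language, every interaction crosses an explicit, user-visible boundary that the system can check.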
