Yuhao Wu (Washington University in St. Louis), Franziska Roesner (University of Washington), Tadayoshi Kohno (University of Washington), Ning Zhang (Washington University in St. Louis), Umar Iqbal (Washington University in St. Louis)

Large language models (LLMs) extended as systems, such as ChatGPT, have begun supporting third-party applications. These LLM apps leverage the de facto natural language-based automated execution paradigm of LLMs: that is, apps and their interactions are defined in natural language, provided access to user data, and allowed to freely interact with each other and the system. These LLM app ecosystems resemble the settings of earlier computing platforms, where there was insufficient isolation between apps and the system. Because third-party apps may not be trustworthy, and because the imprecision of natural language interfaces exacerbates the risk, the current designs pose security and privacy threats to users. In this paper, we evaluate whether these issues can be addressed through execution isolation and what that isolation might look like in the context of LLM-based systems, where there are arbitrary natural language-based interactions between system components, between the LLM and apps, and between apps. To that end, we propose IsolateGPT, a design architecture that demonstrates the feasibility of execution isolation and provides a blueprint for implementing isolation in LLM-based systems. We evaluate IsolateGPT against a number of attacks and demonstrate that it protects against many security, privacy, and safety issues that exist in non-isolated LLM-based systems, without any loss of functionality. The performance overhead incurred by IsolateGPT to improve security is under 30% for three-quarters of tested queries.
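The isolation idea described above can be illustrated with a minimal sketch. The class and function names below (`App`, `Hub`, `route`) are hypothetical and not the authors' implementation; the sketch only shows the core principle the abstract describes: apps hold private state, never call each other directly, and all inter-app interaction is mediated by a trusted hub that requires explicit user approval for each data flow.

```python
class App:
    """A third-party LLM app with state that no other app can read."""
    def __init__(self, name):
        self.name = name
        self.memory = []  # private to this app; no shared state

    def handle(self, query):
        self.memory.append(query)
        return f"{self.name} handled: {query}"


class Hub:
    """Trusted mediator; apps never invoke each other directly."""
    def __init__(self):
        self.apps = {}
        self.approved_flows = set()  # (src, dst) pairs the user allowed

    def register(self, app):
        self.apps[app.name] = app

    def route(self, src, dst, query, user_approves):
        # Inter-app interaction is blocked unless the user grants it.
        if (src, dst) not in self.approved_flows:
            if not user_approves(src, dst):
                raise PermissionError(f"flow {src} -> {dst} denied")
            self.approved_flows.add((src, dst))
        return self.apps[dst].handle(query)


hub = Hub()
hub.register(App("calendar"))
hub.register(App("email"))

# The user is prompted once per (src, dst) pair; here they approve.
reply = hub.route("email", "calendar", "add meeting Friday 3pm",
                  user_approves=lambda s, d: True)
print(reply)  # calendar handled: add meeting Friday 3pm
```

The key design choice, under these assumptions, is that untrusted apps interact only through a narrow, mediated interface rather than through a shared natural-language context, which is what non-isolated systems expose.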
