Zichuan Li (University of Illinois Urbana-Champaign), Jian Cui (University of Illinois Urbana-Champaign), Xiaojing Liao (University of Illinois Urbana-Champaign), Luyi Xing (University of Illinois Urbana-Champaign)
Large Language Model (LLM) agents are autonomous systems powered by LLMs, capable of reasoning and planning to solve problems by leveraging a set of tools. However, the integration of multiple tools in LLM agents introduces challenges in securely managing tools, ensuring their compatibility, handling dependency relationships, and protecting control flows within LLM agent's task workflows. In this paper, we present the first systematic security analysis of task control flows in multi-tool-enabled LLM agents. We identify a novel threat, Cross-Tool Harvesting and Polluting (XTHP), which includes multiple attack vectors to first hijack the normal control flows of agent tasks, and then collect and pollute confidential or private information within LLM agent systems. To understand the impact of this threat, we developed Chord, a dynamic scanning tool designed to automatically detect real-world agent tools susceptible to XTHP attacks. Our evaluation of 66 real-world tools from two major LLM agent development frameworks, LangChain and LlamaIndex, revealed that 75% are vulnerable to XTHP attacks, highlighting the prevalence of this threat.