Zichuan Li (University of Illinois Urbana-Champaign), Jian Cui (University of Illinois Urbana-Champaign), Xiaojing Liao (University of Illinois Urbana-Champaign), Luyi Xing (University of Illinois Urbana-Champaign)

Large Language Model (LLM) agents are autonomous systems powered by LLMs, capable of reasoning and planning to solve problems by leveraging a set of tools. However, the integration of multiple tools in LLM agents introduces challenges in securely managing tools, ensuring their compatibility, handling dependency relationships, and protecting control flows within LLM agent's task workflows. In this paper, we present the first systematic security analysis of task control flows in multi-tool-enabled LLM agents. We identify a novel threat, Cross-Tool Harvesting and Polluting (XTHP), which includes multiple attack vectors to first hijack the normal control flows of agent tasks, and then collect and pollute confidential or private information within LLM agent systems. To understand the impact of this threat, we developed Chord, a dynamic scanning tool designed to automatically detect real-world agent tools susceptible to XTHP attacks. Our evaluation of 66 real-world tools from two major LLM agent development frameworks, LangChain and Llama-Index, revealed that 75% are vulnerable to XTHP attacks, highlighting the prevalence of this threat.

View More Papers

RT-Fuzzer: Task Driven Fuzzing of Real Time Operating System...

Abraham Clements, Abel Gomez Rivera (Sandia National Laboratories), Richard Jiayang Liu, Kirill Levchenko (University of Illinois Urbana-Champaign), Rick Kennell (Purdue University), Gabriela Ciocarlie (The Cybersecurity Manufacturing Innovation Institute and Stevens Institute of Technology) 

Read More

Accurate Identification of the Vulnerability-Introducing Commit based on Differential...

Qixuan Guo (Beijing Jiaotong University), Yongzhong He (Beijing Jiaotong University)

Read More

Bangr: Binary Ninja + angr

Kevan Baker, Daniel R. Tauritz, Samuel Mulder (Auburn University)

Read More