Jiawen Shi (Huazhong University of Science and Technology), Zenghui Yuan (Huazhong University of Science and Technology), Guiyao Tie (Huazhong University of Science and Technology), Pan Zhou (Huazhong University of Science and Technology), Neil Zhenqiang Gong (Duke University), Lichao Sun (Lehigh University)

Tool selection is a key component of LLM agents. A popular approach follows a two-step process, retrieval followed by selection, to pick the most appropriate tool from a tool library for a given task. In this work, we introduce ToolHijacker, a novel prompt injection attack targeting tool selection in no-box scenarios. ToolHijacker injects a malicious tool document into the tool library to manipulate the LLM agent’s tool selection process, compelling it to consistently choose the attacker’s malicious tool for an attacker-chosen target task. Specifically, we formulate the crafting of such tool documents as an optimization problem and propose a two-phase optimization strategy to solve it. Our extensive experimental evaluation shows that ToolHijacker is highly effective, significantly outperforming existing manually crafted and automated prompt injection attacks when applied to tool selection. Moreover, we explore various defenses, including prevention-based defenses (StruQ and SecAlign) and detection-based defenses (known-answer detection, DataSentinel, perplexity detection, and windowed perplexity detection). Our experimental results indicate that these defenses are insufficient, highlighting the urgent need for new defense strategies.
