Yinan Zhong (Zhejiang University), Qianhao Miao (Zhejiang University), Yanjiao Chen (Zhejiang University), Jiangyi Deng (Zhejiang University), Yushi Cheng (Zhejiang University), Wenyuan Xu (Zhejiang University)

Large Language Models (LLMs) have been integrated into many applications (e.g., web agents) to perform sophisticated tasks. However, LLM-empowered applications are vulnerable to Indirect Prompt Injection (IPI) attacks, in which malicious instructions are injected through untrustworthy external data sources. This paper presents Rennervate, a defense framework that detects and prevents IPI attacks. Rennervate leverages attention features to detect covert injections at a fine-grained token level, enabling precise sanitization that neutralizes IPI attacks while preserving LLM functionality. Specifically, the token-level detector is realized with a 2-step attentive pooling mechanism that aggregates attention heads and response tokens for IPI detection and sanitization. Moreover, we establish a fine-grained IPI dataset, FIPI, which will be open-sourced to support further research. Extensive experiments verify that Rennervate outperforms 15 commercial and academic IPI defenses, achieving high precision across 5 LLMs and 6 datasets. We also demonstrate that Rennervate transfers to unseen attacks and is robust against adaptive adversaries.
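The abstract does not specify the exact architecture, but the described 2-step attentive pooling can be sketched roughly as follows: given an attention tensor over (heads, response tokens, input tokens), first pool across attention heads, then across response tokens, yielding a per-input-token injection score that can drive token-level sanitization. All names, shapes, and the thresholding rule below are illustrative assumptions, not the paper's implementation; the learned pooling weights are replaced with random vectors.

```python
import numpy as np

rng = np.random.default_rng(0)

def attentive_pool(x, w, axis):
    """Collapse `axis` of x via a softmax-weighted average (toy attentive pooling)."""
    scores = np.exp(w - w.max())
    scores /= scores.sum()
    shape = [1] * x.ndim
    shape[axis] = -1
    return (x * scores.reshape(shape)).sum(axis=axis)

# Toy attention tensor: (num_heads, num_response_tokens, num_input_tokens).
# In practice these would come from the LLM's attention maps.
H, R, T = 4, 6, 10
attn = rng.random((H, R, T))

# Step 1: aggregate attention heads (weights would be learned; random here).
w_heads = rng.random(H)
pooled_heads = attentive_pool(attn, w_heads, axis=0)        # shape (R, T)

# Step 2: aggregate response tokens.
w_resp = rng.random(R)
token_scores = attentive_pool(pooled_heads, w_resp, axis=0)  # shape (T,)

# Hypothetical token-level decision: flag input tokens scoring above a
# threshold (mean used here for illustration) as injected, then sanitize them.
flagged = np.where(token_scores > token_scores.mean())[0]
print(token_scores.shape, flagged)
```

Because both pooling steps are convex combinations, the per-token scores stay in the range of the original attention values; the flagged indices would then be removed or masked from the external data before it reaches the LLM.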

View More Papers

Work-in-progress: The Case for LLM-Enhanced Backward Tracking

Jiahui Wang (Zhejiang University), Xiangmin Shen (Hofstra University), Zhengkai Wang, Zhenyuan Li (Zhejiang University)


Dilipa: Making Micropatches from Edits to Lifted C

Henny Sipma, Ricardo Baratto, Ben Karel, Michael Gordon (Aarno Labs)


SVDefense: Effective Defense against Gradient Inversion Attacks via Singular...

Chenxiang Luo (City University of Hong Kong), David Yau (Singapore University of Technology and Design), Qun Song (City University of Hong Kong)
