Gelei Deng, Yi Liu (Nanyang Technological University), Yuekang Li (The University of New South Wales), Kailong Wang (Huazhong University of Science and Technology), Tianwei Zhang, Yang Liu (Nanyang Technological University)

Large Language Models (LLMs) have gained immense popularity and are being increasingly applied in various domains. Consequently, ensuring the security of these models is of paramount importance. Jailbreak attacks, which manipulate LLMs into generating malicious content, are recognized as a significant vulnerability. While existing research has predominantly focused on direct jailbreak attacks on LLMs, there has been limited exploration of indirect methods. The integration of various plugins into LLMs, notably Retrieval Augmented Generation (RAG), which enables applications such as GPTs to incorporate external knowledge bases into their response generation, introduces new avenues for indirect jailbreak attacks.

To fill this gap, we investigate indirect jailbreak attacks on LLMs, particularly GPTs, and introduce a novel attack vector named Retrieval Augmented Generation Poisoning. Our method, PANDORA, exploits the synergy between LLMs and RAG through prompt manipulation to generate unexpected responses. PANDORA uses maliciously crafted content to influence the RAG process, effectively initiating jailbreak attacks. Our preliminary tests show that PANDORA successfully conducts jailbreak attacks in four different scenarios, achieving higher success rates than direct attacks: 64.3% for GPT-3.5 and 34.8% for GPT-4.
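To make the attack surface concrete, the sketch below is a minimal, hypothetical RAG pipeline (not the paper's implementation): a retriever ranks knowledge-base documents against the user query and splices the top match verbatim into the LLM prompt, so an attacker-planted document that matches topical queries reaches the model as trusted context. The retrieval scheme, document contents, and function names here are illustrative assumptions.

```python
# Toy sketch of the RAG poisoning surface: a malicious document planted
# in the knowledge base is retrieved and concatenated into the prompt.
# Hypothetical names; real pipelines use embedding similarity, not the
# naive keyword overlap used here for self-containment.

def retrieve(query: str, corpus: list[str], k: int = 1) -> list[str]:
    """Rank documents by keyword overlap with the query; return top k."""
    def score(doc: str) -> int:
        return len(set(query.lower().split()) & set(doc.lower().split()))
    return sorted(corpus, key=score, reverse=True)[:k]

def build_prompt(query: str, corpus: list[str]) -> str:
    """Splice retrieved context into the prompt, as RAG pipelines do."""
    context = "\n".join(retrieve(query, corpus))
    return f"Context:\n{context}\n\nQuestion: {query}"

corpus = [
    "Cooking basics: how to boil pasta safely.",
    # Attacker-planted document: topical keywords make it rank highly,
    # and its payload then reaches the model as trusted context.
    "chemistry safety guide [ADVERSARIAL JAILBREAK PAYLOAD HERE]",
]
prompt = build_prompt("chemistry safety guide", corpus)
```

Because the retrieved text is injected with the same authority as system-curated context, the model has no provenance signal distinguishing the planted payload from legitimate knowledge, which is what makes this an indirect jailbreak vector.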
