Gelei Deng (Nanyang Technological University), Yi Liu (Nanyang Technological University), Yuekang Li (University of New South Wales), Kailong Wang (Huazhong University of Science and Technology), Ying Zhang (Virginia Tech), Zefeng Li (Nanyang Technological University), Haoyu Wang (Huazhong University of Science and Technology), Tianwei Zhang (Nanyang Technological University), Yang Liu (Nanyang Technological University)

Large language model (LLM) chatbots have made significant strides in various fields but remain vulnerable to jailbreak attacks, which aim to elicit inappropriate responses. Despite efforts to identify these weaknesses, current strategies are ineffective against mainstream LLM chatbots, mainly due to undisclosed defensive measures deployed by service providers. Our paper introduces MASTERKEY, a framework for exploring the dynamics of jailbreak attacks and countermeasures. We present a novel method that uses time-based characteristics to dissect LLM chatbot defenses. This technique, inspired by time-based SQL injection, uncovers the workings of these defenses, and we leverage the insight to demonstrate a proof-of-concept attack on several LLM chatbots.
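To make the time-based idea concrete, below is a minimal sketch (not the authors' actual tooling) of latency probing against a chatbot. The `send` callable, the prompts, and the interpretation of the timings are assumptions that follow the SQL-injection analogy described above.

```python
import time

def probe_latency(send, prompt, trials=5):
    """Measure the mean blocking-response latency for a prompt.

    Comparing latencies of a benign control prompt and a policy-violating
    probe prompt of similar length hints at where a defense acts: a
    near-instant refusal suggests input-side filtering, while a delay that
    grows with the generated length suggests the answer is produced first
    and checked afterwards.
    """
    samples = []
    for _ in range(trials):
        start = time.monotonic()
        send(prompt)                      # blocking call to the chatbot under test
        samples.append(time.monotonic() - start)
    return sum(samples) / len(samples)

# Usage sketch (client.send is a hypothetical SDK call):
# benign_t = probe_latency(client.send, "Summarize the plot of Hamlet.")
# probe_t  = probe_latency(client.send, suspected_filtered_prompt)
```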

Additionally, MASTERKEY features an innovative approach for automatically generating jailbreak prompts that target well-defended LLM chatbots. By fine-tuning an LLM on known jailbreak prompts, we generate new attacks with a 21.58% success rate, significantly higher than the 7.33% achieved by existing methods. We have informed the affected service providers of these findings, highlighting the urgent need for stronger defenses. This work not only reveals vulnerabilities in mainstream LLM chatbots but also underscores the importance of robust defenses against such attacks.
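The generation step boils down to standard causal-LM fine-tuning on a corpus of previously successful prompts. The following is a minimal sketch of that idea using Hugging Face transformers; the model name, file path, and hyperparameters are illustrative placeholders, not the authors' actual setup.

```python
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

model_name = "gpt2"  # placeholder; any causal LM works in principle
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(model_name)

# jailbreak_prompts.txt (hypothetical): one known-working prompt per line
dataset = load_dataset("text", data_files={"train": "jailbreak_prompts.txt"})
tokenized = dataset.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
    batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="prompt-generator",
                           num_train_epochs=3,
                           per_device_train_batch_size=4),
    train_dataset=tokenized["train"],
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()  # the tuned model is then sampled to propose new prompts
```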
