Songze Li (Southeast University), Jiameng Cheng (Southeast University), Yiming Li (Nanyang Technological University), Xiaojun Jia (Nanyang Technological University), Dacheng Tao (Nanyang Technological University)

Multimodal large language models (MLLMs), which combine text with other modalities such as images, have demonstrated powerful capabilities and are increasingly adopted in real-world commercial systems. However, their growing accessibility also raises concerns about misuse, such as generating harmful content. To mitigate these risks, alignment techniques are commonly applied to align model behavior with human values. Despite these efforts, recent studies have shown that jailbreak attacks can circumvent alignment and elicit unsafe outputs. Most existing jailbreak methods, however, are tailored to open-source models and exhibit limited effectiveness against commercial MLLM-integrated systems, which often employ additional filters. These filters detect and block malicious input and output content, significantly reducing the jailbreak threat.

In this paper, we reveal that the success of these safety filters rests on a critical assumption: that malicious content must be explicitly visible in either the input or the output. This assumption, while often valid for traditional LLM-integrated systems, breaks down in MLLM-integrated systems, where attackers can exploit multiple modalities to conceal adversarial intent, creating a false sense of security in existing MLLM-integrated systems. To challenge this assumption, we propose Odysseus, a novel jailbreak paradigm that introduces dual steganography to covertly embed malicious queries and responses into benign-looking images. Our method proceeds through four stages: (1) malicious query encoding, (2) steganographic embedding, (3) model interaction, and (4) response extraction. We first encode the adversary-specified malicious prompt into binary matrices and embed them into an image using a steganography model. The modified image is then fed into the victim MLLM-integrated system, which is prompted to implant the generated illegitimate content into a carrier image (again via steganography); the attacker then decodes the hidden response locally. Extensive experiments on benchmark datasets demonstrate that Odysseus successfully jailbreaks several pioneering, real-world MLLM-integrated systems, including GPT-4o, Gemini-2.0-pro, Gemini-2.0-flash, and Grok-3, achieving up to a 99% attack success rate. These results expose a fundamental blind spot in existing defenses and call for rethinking cross-modal security in MLLM-integrated systems.
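The encode/embed/extract pipeline above can be illustrated with a toy least-significant-bit (LSB) scheme. This is a minimal sketch, not the paper's method: Odysseus uses a learned steganography model, whereas here plain LSB substitution over a raw byte buffer stands in for stages (1), (2), and (4); the image, message, and helper names are all illustrative assumptions.

```python
# Toy LSB steganography: a stand-in for the paper's learned steganography model.

def text_to_bits(text: str) -> list[int]:
    """Stage (1): encode a query as a flat binary sequence (MSB first)."""
    return [(byte >> i) & 1 for byte in text.encode("utf-8") for i in range(7, -1, -1)]

def embed(pixels: bytearray, bits: list[int]) -> bytearray:
    """Stage (2): hide each bit in the least significant bit of one pixel byte."""
    assert len(bits) <= len(pixels), "carrier image too small"
    out = bytearray(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & 0xFE) | bit
    return out

def extract(pixels: bytearray, n_bytes: int) -> str:
    """Stage (4): read the LSBs back and decode them to text."""
    bits = [pixels[i] & 1 for i in range(n_bytes * 8)]
    data = bytes(
        sum(bit << (7 - j) for j, bit in enumerate(bits[k:k + 8]))
        for k in range(0, len(bits), 8)
    )
    return data.decode("utf-8")

# Example: a flat 32x32 grayscale "image" as raw bytes (hypothetical carrier).
carrier = bytearray(128 for _ in range(32 * 32))
message = "query"
stego = embed(carrier, text_to_bits(message))
assert extract(stego, len(message)) == message
```

Because each pixel byte changes by at most 1, the stego image is visually indistinguishable from the carrier, which is the property that lets such payloads pass filters that only inspect visible content.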
