Wei Zhao (Singapore Management University), Zhe Li (Singapore Management University), Yige Li (Singapore Management University), Jun Sun (Singapore Management University)

Multimodal Large Language Models (MLLMs) have demonstrated impressive capabilities in cross-modal understanding, but remain vulnerable to adversarial attacks through visual inputs despite robust textual safety mechanisms. These vulnerabilities arise from two core weaknesses: the continuous nature of visual representations, which admits gradient-based attacks, and the inadequate transfer of text-based safety mechanisms to visual content. We introduce Q-MLLM, a novel architecture that integrates two-level vector quantization to create a discrete bottleneck against adversarial attacks while preserving multimodal reasoning capabilities. By discretizing visual representations at both the pixel-patch and semantic levels, Q-MLLM blocks these attack pathways and bridges the cross-modal safety alignment gap. Our two-stage training methodology ensures robust learning while maintaining model utility. Experiments demonstrate that Q-MLLM achieves a significantly higher defense success rate against both jailbreak attacks and toxic image attacks than existing approaches. Notably, Q-MLLM achieves a perfect (100%) defense success rate against jailbreak attacks in all but one arguable case, while maintaining competitive performance on multiple utility benchmarks with minimal inference overhead. This work establishes vector quantization as an effective defense mechanism for secure multimodal AI systems, without requiring expensive safety-specific fine-tuning or detection overhead.
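
To illustrate the core idea of a discrete bottleneck, here is a minimal PyTorch sketch of a vector-quantization layer applied to visual patch embeddings. All names (VQBottleneck, num_codes, the codebook sizes) are hypothetical illustrations; the paper's actual Q-MLLM architecture, codebooks, and training objective may differ.

```python
# Minimal sketch: a vector-quantization bottleneck over visual embeddings.
# Because each output token can only take one of num_codes discrete values,
# a small gradient-based perturbation of the input that does not cross a
# codebook boundary leaves the quantized output unchanged.
import torch
import torch.nn as nn


class VQBottleneck(nn.Module):
    """Snap continuous embeddings to their nearest codebook entry."""

    def __init__(self, num_codes: int = 1024, dim: int = 768):
        super().__init__()
        self.codebook = nn.Embedding(num_codes, dim)

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        # z: (batch, num_tokens, dim) continuous embeddings.
        # L2 distance from every embedding to every codebook entry.
        codes = self.codebook.weight.unsqueeze(0).expand(z.size(0), -1, -1)
        dists = torch.cdist(z, codes)
        indices = dists.argmin(dim=-1)      # discrete code ids
        z_q = self.codebook(indices)        # quantized embeddings
        # Straight-through estimator so gradients still reach the encoder
        # during training, while the forward pass stays discrete.
        return z + (z_q - z).detach()


# Two-level use, loosely following the abstract's description: quantize
# patch-level features first, then a pooled "semantic" summary of them.
patch_vq = VQBottleneck(num_codes=8192, dim=768)
semantic_vq = VQBottleneck(num_codes=1024, dim=768)

patches = torch.randn(2, 196, 768)          # e.g. ViT patch embeddings
q_patches = patch_vq(patches)
q_semantic = semantic_vq(q_patches.mean(dim=1, keepdim=True))
```

In this sketch, the discretization is what blunts gradient-based attacks: the attacker's perturbation must be large enough to flip a nearest-neighbor assignment before the downstream language model sees any change at all.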
