Wei Zhao (Singapore Management University), Zhe Li (Singapore Management University), Yige Li (Singapore Management University), Jun Sun (Singapore Management University)

Multimodal Large Language Models (MLLMs) have demonstrated impressive capabilities in cross-modal understanding, but remain vulnerable to adversarial attacks through visual inputs despite robust textual safety mechanisms. These vulnerabilities arise from two core weaknesses: the continuous nature of visual representations, which allows for gradient-based attacks, and the inadequate transfer of text-based safety mechanisms to visual content. We introduce Q-MLLM, a novel architecture that integrates two-level vector quantization to create a discrete bottleneck against adversarial attacks while preserving multimodal reasoning capabilities. By discretizing visual representations at both the pixel-patch and semantic levels, Q-MLLM blocks attack pathways and bridges the cross-modal safety alignment gap. Our two-stage training methodology ensures robust learning while maintaining model utility. Experiments demonstrate that Q-MLLM achieves a significantly higher defense success rate than existing approaches against both jailbreak attacks and toxic image attacks. Notably, Q-MLLM achieves a perfect defense success rate (100%) against jailbreak attacks, except in one arguable case, while maintaining competitive performance on multiple utility benchmarks with minimal inference overhead. This work establishes vector quantization as an effective defense mechanism for secure multimodal AI systems, requiring neither expensive safety-specific fine-tuning nor detection overhead.
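To illustrate the core idea, the sketch below implements a single vector-quantization bottleneck in PyTorch: each continuous patch embedding is snapped to its nearest codebook entry, so a gradient-based perturbation too small to push an embedding across a codebook-cell boundary selects identical discrete codes and never reaches the language model. This is a minimal sketch for intuition, not the authors' implementation: the `VQBottleneck` class, the codebook size (1024), the embedding dimension (768), and the straight-through gradient estimator are all illustrative assumptions, and only one of Q-MLLM's two quantization levels is shown.

```python
# Minimal sketch of a vector-quantization (VQ) bottleneck, for intuition
# only. NOT the authors' Q-MLLM code: the codebook size, embedding
# dimension, and straight-through estimator are illustrative assumptions,
# and only one of Q-MLLM's two quantization levels is shown.
import torch
import torch.nn as nn


class VQBottleneck(nn.Module):
    """Snap each continuous patch embedding to its nearest codebook entry."""

    def __init__(self, num_codes: int = 1024, dim: int = 768):
        super().__init__()
        self.codebook = nn.Embedding(num_codes, dim)

    def forward(self, z: torch.Tensor):
        # z: (batch, patches, dim) continuous visual embeddings.
        b, p, d = z.shape
        flat = z.reshape(-1, d)                          # (b*p, d)
        dists = torch.cdist(flat, self.codebook.weight)  # (b*p, num_codes)
        idx = dists.argmin(dim=-1)                       # nearest code per patch
        z_q = self.codebook(idx).reshape(b, p, d)        # discrete replacement
        # Straight-through estimator: quantized values in the forward pass,
        # identity gradients in the backward pass, so the visual encoder
        # stays trainable through the discrete bottleneck.
        return z + (z_q - z).detach(), idx.reshape(b, p)


vq = VQBottleneck()
patches = torch.randn(2, 16, 768)                        # mock ViT patch embeddings
perturbed = patches + 1e-5 * torch.randn_like(patches)   # tiny adversarial-style noise

_, idx_clean = vq(patches)
_, idx_pert = vq(perturbed)
# A perturbation that stays inside a codebook cell maps to the same codes.
print((idx_clean == idx_pert).float().mean().item())     # fraction unchanged, typically 1.0
```

Under this view, flipping a code requires a perturbation on the order of the codebook-cell radius, which replaces the smooth loss surface that gradient-based attacks exploit with a piecewise-constant one.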
