Xiaoyun Xu (Radboud University), Shujian Yu (Vrije Universiteit Amsterdam), Zhuoran Liu (Radboud University), Stjepan Picek (Radboud University)

Vision Transformers (ViTs) have emerged as a fundamental architecture and serve as the backbone of modern vision-language models. Despite their impressive performance, ViTs exhibit notable vulnerability to evasion attacks, necessitating the development of specialized Adversarial Training (AT) strategies tailored to their unique architecture.
While a direct solution might involve applying existing AT methods to ViTs, our analysis reveals significant incompatibilities, particularly with state-of-the-art (SOTA) approaches such as Generalist (CVPR 2023) and DBAT (USENIX Security 2024).
This paper presents a systematic investigation of adversarial robustness in ViTs and provides a novel theoretical Mutual Information (MI) analysis of their autoencoder-based self-supervised pre-training.
Specifically, we show that MI between the adversarial example and its latent representation in ViT-based autoencoders should be constrained via derived MI bounds.
Building on this insight, we propose a self-supervised AT method, MIMIR, that employs an MI penalty to facilitate adversarial pre-training by masked image modeling with autoencoders.
Extensive experiments on CIFAR-10, Tiny-ImageNet, and ImageNet-1K show that MIMIR consistently improves both natural and robust accuracy; in particular, MIMIR outperforms SOTA AT results on ImageNet-1K.
Notably, MIMIR demonstrates superior robustness against unforeseen attacks and common corruptions, and can also withstand adaptive attacks where the adversary possesses full knowledge of the defense mechanism.
Our code and trained models are publicly available at: https://github.com/xiaoyunxxy/MIMIR.
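To make the idea concrete, the following is a minimal, hypothetical sketch of self-supervised adversarial pre-training with an MI penalty, in the spirit the abstract describes: an attack is mounted on the autoencoder's reconstruction loss (no labels needed), and the dependence between the adversarial input and its latent representation is penalized. Everything here is an assumption for illustration, not the paper's method: the tiny MLP autoencoder stands in for a masked ViT autoencoder (masking is omitted for brevity), HSIC is used as a crude differentiable stand-in for an MI estimate, and `lam` is a made-up penalty weight.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy autoencoder standing in for a ViT-based masked autoencoder
# (illustration only; the real architecture is a ViT with masking).
class TinyAE(nn.Module):
    def __init__(self, dim=32, latent=8):
        super().__init__()
        self.enc = nn.Linear(dim, latent)
        self.dec = nn.Linear(latent, dim)

    def forward(self, x):
        z = self.enc(x)
        return self.dec(z), z

def rbf_gram(x, sigma=1.0):
    """Pairwise RBF-kernel Gram matrix over a batch of flattened vectors."""
    x = x.flatten(1)
    d2 = torch.cdist(x, x) ** 2
    return torch.exp(-d2 / (2 * sigma ** 2))

def hsic(kx, kz):
    """Biased HSIC estimate: a kernel dependence measure used here as a
    crude stand-in for the MI penalty between input and latent."""
    n = kx.size(0)
    h = torch.eye(n) - torch.ones(n, n) / n  # centering matrix
    return torch.trace(kx @ h @ kz @ h) / (n - 1) ** 2

def pgd_attack(model, x, eps=0.1, alpha=0.02, steps=5):
    """PGD on the reconstruction loss: a label-free, self-supervised attack."""
    x_adv = x.clone().detach().requires_grad_(True)
    for _ in range(steps):
        recon, _ = model(x_adv)
        loss = nn.functional.mse_loss(recon, x)
        grad, = torch.autograd.grad(loss, x_adv)
        # Ascent step, then projection back into the eps-ball around x.
        x_adv = (x_adv + alpha * grad.sign()).detach()
        x_adv = (x + (x_adv - x).clamp(-eps, eps)).requires_grad_(True)
    return x_adv.detach()

model = TinyAE()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.randn(16, 32)  # a batch of flattened toy "images"

# One pre-training step: reconstruction loss on adversarial inputs,
# plus a penalty constraining input-latent dependence.
x_adv = pgd_attack(model, x)
recon, z_adv = model(x_adv)
mi_pen = hsic(rbf_gram(x_adv), rbf_gram(z_adv))
lam = 0.1  # hypothetical penalty weight
loss = nn.functional.mse_loss(recon, x) + lam * mi_pen
opt.zero_grad()
loss.backward()
opt.step()
```

The biased HSIC estimate is nonnegative for PSD kernels, so the penalty pushes the encoder toward latents that carry less information about the adversarial input; the reconstruction term keeps the representation useful. How the real MI bounds are derived and estimated is specified in the paper itself.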
