Yichen Gong (Tsinghua University), Delong Ran (Tsinghua University), Xinlei He (Hong Kong University of Science and Technology (Guangzhou)), Tianshuo Cong (Tsinghua University), Anyu Wang (Tsinghua University), Xiaoyun Wang (Tsinghua University)

The safety alignment of Large Language Models (LLMs) is crucial to prevent unsafe content that violates human values.
To ensure this, it is essential to evaluate the robustness of their alignment against diverse malicious attacks.
However, the lack of a large-scale, unified measurement framework hinders a comprehensive understanding of potential vulnerabilities.
To fill this gap, this paper presents the first comprehensive evaluation of existing and newly proposed safety misalignment methods for LLMs. Specifically, we investigate four research questions: (1) evaluating the robustness of LLMs with different alignment strategies, (2) identifying the most effective misalignment method, (3) determining key factors that influence misalignment effectiveness, and (4) exploring various defenses.
The safety misalignment attacks in our paper include system-prompt modification, model fine-tuning, and model editing.
Our findings show that Supervised Fine-Tuning is the most potent attack but requires harmful model responses.
In contrast, our novel Self-Supervised Representation Attack (SSRA) achieves significant misalignment without harmful responses.
We also examine defensive mechanisms such as safety data filter, model detoxification, and our proposed Self-Supervised Representation Defense (SSRD), demonstrating that SSRD can effectively re-align the model.
In conclusion, our unified safety alignment evaluation framework empirically highlights the fragility of the safety alignment of LLMs.

View More Papers

DeFiIntel: A Dataset Bridging On-Chain and Off-Chain Data for...

Iori Suzuki (Graduate School of Environment and Information Sciences, Yokohama National University), Yin Minn Pa Pa (Institute of Advanced Sciences, Yokohama National University), Nguyen Thi Van Anh (Institute of Advanced Sciences, Yokohama National University), Katsunari Yoshioka (Graduate School of Environment and Information Sciences, Yokohama National University)

Read More

Revisiting Concept Drift in Windows Malware Detection: Adaptation to...

Adrian Shuai Li (Purdue University), Arun Iyengar (Intelligent Data Management and Analytics, LLC), Ashish Kundu (Cisco Research), Elisa Bertino (Purdue University)

Read More

Non-intrusive and Unconstrained Keystroke Inference in VR Platforms via...

Tao Ni (City University of Hong Kong), Yuefeng Du (City University of Hong Kong), Qingchuan Zhao (City University of Hong Kong), Cong Wang (City University of Hong Kong)

Read More

PhantomLiDAR: Cross-modality Signal Injection Attacks against LiDAR

Zizhi Jin (Zhejiang University), Qinhong Jiang (Zhejiang University), Xuancun Lu (Zhejiang University), Chen Yan (Zhejiang University), Xiaoyu Ji (Zhejiang University), Wenyuan Xu (Zhejiang University)

Read More