Yichen Gong (Tsinghua University), Delong Ran (Tsinghua University), Xinlei He (Hong Kong University of Science and Technology (Guangzhou)), Tianshuo Cong (Tsinghua University), Anyu Wang (Tsinghua University), Xiaoyun Wang (Tsinghua University)

The safety alignment of Large Language Models (LLMs) is crucial to prevent unsafe content that violates human values.
To ensure this, it is essential to evaluate the robustness of their alignment against diverse malicious attacks.
However, the lack of a large-scale, unified measurement framework hinders a comprehensive understanding of potential vulnerabilities.
To fill this gap, this paper presents the first comprehensive evaluation of existing and newly proposed safety misalignment methods for LLMs. Specifically, we investigate four research questions: (1) evaluating the robustness of LLMs with different alignment strategies, (2) identifying the most effective misalignment method, (3) determining key factors that influence misalignment effectiveness, and (4) exploring various defenses.
The safety misalignment attacks in our paper include system-prompt modification, model fine-tuning, and model editing.
Our findings show that Supervised Fine-Tuning is the most potent attack but requires harmful model responses.
In contrast, our novel Self-Supervised Representation Attack (SSRA) achieves significant misalignment without harmful responses.
We also examine defensive mechanisms such as safety data filter, model detoxification, and our proposed Self-Supervised Representation Defense (SSRD), demonstrating that SSRD can effectively re-align the model.
In conclusion, our unified safety alignment evaluation framework empirically highlights the fragility of the safety alignment of LLMs.

View More Papers

Secure Transformer Inference Made Non-interactive

Jiawen Zhang (Zhejiang University), Xinpeng Yang (Zhejiang University), Lipeng He (University of Waterloo), Kejia Chen (Zhejiang University), Wen-jie Lu (Zhejiang University), Yinghao Wang (Zhejiang University), Xiaoyang Hou (Zhejiang University), Jian Liu (Zhejiang University), Kui Ren (Zhejiang University), Xiaohu Yang (Zhejiang University)

Read More

BARBIE: Robust Backdoor Detection Based on Latent Separability

Hanlei Zhang (Zhejiang University), Yijie Bai (Zhejiang University), Yanjiao Chen (Zhejiang University), Zhongming Ma (Zhejiang University), Wenyuan Xu (Zhejiang University)

Read More

WAVEN: WebAssembly Memory Virtualization for Enclaves

Weili Wang (Southern University of Science and Technology), Honghan Ji (ByteDance Inc.), Peixuan He (ByteDance Inc.), Yao Zhang (ByteDance Inc.), Ye Wu (ByteDance Inc.), Yinqian Zhang (Southern University of Science and Technology)

Read More

SIGuard: Guarding Secure Inference with Post Data Privacy

Xinqian Wang (RMIT University), Xiaoning Liu (RMIT University), Shangqi Lai (CSIRO Data61), Xun Yi (RMIT University), Xingliang Yuan (University of Melbourne)

Read More