Yichen Gong (Tsinghua University), Delong Ran (Tsinghua University), Xinlei He (Hong Kong University of Science and Technology (Guangzhou)), Tianshuo Cong (Tsinghua University), Anyu Wang (Tsinghua University), Xiaoyun Wang (Tsinghua University)

The safety alignment of Large Language Models (LLMs) is crucial to prevent them from generating unsafe content that violates human values.
To ensure this, it is essential to evaluate the robustness of their alignment against diverse malicious attacks.
However, the lack of a large-scale, unified measurement framework hinders a comprehensive understanding of potential vulnerabilities.
To fill this gap, this paper presents the first comprehensive evaluation of existing and newly proposed safety misalignment methods for LLMs. Specifically, we investigate four research questions: (1) how robust are LLMs trained with different alignment strategies, (2) which misalignment method is most effective, (3) which key factors influence misalignment effectiveness, and (4) how well do various defenses perform.
The safety misalignment attacks in our paper include system-prompt modification, model fine-tuning, and model editing.
Our findings show that Supervised Fine-Tuning is the most potent attack but requires harmful model responses.
In contrast, our novel Self-Supervised Representation Attack (SSRA) achieves significant misalignment without harmful responses.
We also examine defensive mechanisms such as safety data filtering, model detoxification, and our proposed Self-Supervised Representation Defense (SSRD), demonstrating that SSRD can effectively re-align the model.
In conclusion, our unified safety alignment evaluation framework empirically highlights the fragility of the safety alignment of LLMs.
