Yichen Gong (Tsinghua University), Delong Ran (Tsinghua University), Xinlei He (Hong Kong University of Science and Technology (Guangzhou)), Tianshuo Cong (Tsinghua University), Anyu Wang (Tsinghua University), Xiaoyun Wang (Tsinghua University)

The safety alignment of Large Language Models (LLMs) is crucial for preventing the generation of unsafe content that violates human values.
To ensure this, it is essential to evaluate the robustness of their alignment against diverse malicious attacks.
However, the lack of a large-scale, unified measurement framework hinders a comprehensive understanding of potential vulnerabilities.
To fill this gap, this paper presents the first comprehensive evaluation of existing and newly proposed safety misalignment methods for LLMs. Specifically, we investigate four research questions: (1) evaluating the robustness of LLMs with different alignment strategies, (2) identifying the most effective misalignment method, (3) determining key factors that influence misalignment effectiveness, and (4) exploring various defenses.
The safety misalignment attacks in our paper include system-prompt modification, model fine-tuning, and model editing.
Our findings show that Supervised Fine-Tuning is the most potent attack but requires harmful model responses.
In contrast, our novel Self-Supervised Representation Attack (SSRA) achieves significant misalignment without harmful responses.
We also examine defensive mechanisms such as safety data filtering, model detoxification, and our proposed Self-Supervised Representation Defense (SSRD), demonstrating that SSRD can effectively re-align the model.
In conclusion, our unified safety alignment evaluation framework empirically highlights the fragility of the safety alignment of LLMs.
