Licheng Pan (Zhejiang University), Yunsheng Lu (University of Chicago), Jiexi Liu (Alibaba Group), Jialing Tao (Alibaba Group), Haozhe Feng (Zhejiang University), Hui Xue (Alibaba Group), Zhixuan Chu (Zhejiang University), Kui Ren (Zhejiang University)

Uncovering the mechanisms behind "jailbreaks" in large language models (LLMs) is crucial for enhancing their safety and reliability, yet these mechanisms remain poorly understood.
Existing studies predominantly analyze jailbreak prompts by probing latent representations, often overlooking the causal relationships between interpretable prompt features and jailbreak occurrences. In this work, we propose Causal Analyst, a framework that integrates LLMs into data-driven causal discovery to identify the direct causes of jailbreaks and leverage them for both attack and defense. We introduce a comprehensive dataset comprising 35k jailbreak attempts across seven LLMs, systematically constructed from 100 attack templates and 50 harmful queries, annotated with 37 meticulously designed human-readable prompt features.
By jointly training LLM-based prompt encoding and GNN-based causal graph learning, we reconstruct causal pathways linking prompt features to jailbreak responses. Our analysis reveals that specific features, such as "Positive Character" and "Number of Task Steps", act as direct causal drivers of jailbreaks. We demonstrate the practical utility of these insights through two applications: (1) a Jailbreaking Enhancer that targets identified causal features to significantly boost attack success rates on public benchmarks, and (2) a Guardrail Advisor that utilizes the learned causal graph to extract true malicious intent from obfuscated queries.
Extensive experiments, including baseline comparisons and causal structure validation, confirm the robustness of our causal analysis and its superiority over non-causal approaches.
Our results suggest that analyzing jailbreak features from a causal perspective is an effective and interpretable approach for improving LLM reliability. Our code is available at https://github.com/Master-PLC/Causal-Analyst.
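The core idea of separating direct causal drivers from merely correlated prompt features can be illustrated with a toy stratification check. This is a minimal sketch on synthetic data, not the paper's joint LLM/GNN method: the feature names (`positive_character`, `polite_tone`) and probabilities are assumptions made for illustration, and the causal test used here is a simple conditional-probability comparison rather than full causal graph learning.

```python
import random

random.seed(0)

# Toy synthetic data: each prompt has two binary features.
# "positive_character" is a direct cause of jailbreak success;
# "polite_tone" is correlated with it but has no direct effect.
def sample_prompt():
    positive_character = random.random() < 0.5
    polite_tone = positive_character if random.random() < 0.8 else not positive_character
    jailbreak = random.random() < (0.7 if positive_character else 0.1)
    return positive_character, polite_tone, jailbreak

data = [sample_prompt() for _ in range(20000)]

def p_jb_given(idx, value):
    # Jailbreak rate among prompts where feature `idx` equals `value`.
    rows = [r for r in data if r[idx] == value]
    return sum(r[2] for r in rows) / len(rows)

def p_jb_given_both(pc, pt):
    # Jailbreak rate stratified on both features.
    rows = [r for r in data if r[0] == pc and r[1] == pt]
    return sum(r[2] for r in rows) / len(rows)

# Marginally, both features look predictive of jailbreaks...
print(f"P(jb | positive_character=1) = {p_jb_given(0, True):.2f}")
print(f"P(jb | polite_tone=1)        = {p_jb_given(1, True):.2f}")

# ...but holding the true cause fixed, the spurious feature's
# apparent effect vanishes, exposing it as non-causal.
print(f"P(jb | pc=1, pt=1) = {p_jb_given_both(True, True):.2f}")
print(f"P(jb | pc=1, pt=0) = {p_jb_given_both(True, False):.2f}")
```

A correlational analysis would flag both features as jailbreak indicators; the stratified comparison keeps only the direct cause, which is the kind of distinction a learned causal graph makes at scale across all 37 annotated features.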
