Licheng Pan (Zhejiang University), Yunsheng Lu (University of Chicago), Jiexi Liu (Alibaba Group), Jialing Tao (Alibaba Group), Haozhe Feng (Zhejiang University), Hui Xue (Alibaba Group), Zhixuan Chu (Zhejiang University), Kui Ren (Zhejiang University)

Uncovering the mechanisms behind "jailbreaks" in large language models (LLMs) is crucial for enhancing their safety and reliability, yet these mechanisms remain poorly understood.
Existing studies predominantly analyze jailbreak prompts by probing latent representations, often overlooking the causal relationships between interpretable prompt features and jailbreak occurrences. In this work, we propose Causal Analyst, a framework that integrates LLMs into data-driven causal discovery to identify the direct causes of jailbreaks and leverage them for both attack and defense. We introduce a comprehensive dataset comprising 35k jailbreak attempts across seven LLMs, systematically constructed from 100 attack templates and 50 harmful queries, annotated with 37 meticulously designed human-readable prompt features.
By jointly training an LLM-based prompt encoder and a GNN-based causal graph learner, we reconstruct causal pathways linking prompt features to jailbreak responses. Our analysis reveals that specific features, such as "Positive Character" and "Number of Task Steps", act as direct causal drivers of jailbreaks. We demonstrate the practical utility of these insights through two applications: (1) a Jailbreaking Enhancer that targets identified causal features to significantly boost attack success rates on public benchmarks, and (2) a Guardrail Advisor that utilizes the learned causal graph to extract true malicious intent from obfuscated queries.
Extensive experiments, including baseline comparisons and causal structure validation, confirm the robustness of our causal analysis and its superiority over non-causal approaches.
Our results suggest that analyzing jailbreak features from a causal perspective is an effective and interpretable approach for improving LLM reliability. Our code is available at https://github.com/Master-PLC/Causal-Analyst.
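For readers curious how such joint training could be set up, the following is a minimal sketch, not the authors' released implementation (see the repository above), of training a per-feature encoder (a stand-in for the LLM-based prompt encoder) together with a learnable adjacency over the 37 annotated prompt features under a NOTEARS-style acyclicity penalty. All module names, dimensions, loss weights, and the toy data are illustrative assumptions, not details taken from the paper.

# Minimal sketch (assumed setup, not the paper's code): jointly fit an
# encoder over prompt features and a weighted adjacency that is pushed
# toward a DAG via a NOTEARS-style penalty, while predicting jailbreaks.
import torch
import torch.nn as nn

N_FEATURES = 37   # number of human-readable prompt features in the dataset
HIDDEN = 32       # illustrative hidden size

class PromptEncoder(nn.Module):
    """Placeholder for the LLM-based prompt encoder: maps each scalar
    feature value to a hidden state. A real system would instead pool
    LLM embeddings of the feature's textual description/value."""
    def __init__(self):
        super().__init__()
        self.proj = nn.Linear(1, HIDDEN)

    def forward(self, x):                               # x: (batch, N_FEATURES)
        return torch.tanh(self.proj(x.unsqueeze(-1)))   # (batch, N_FEATURES, HIDDEN)

class CausalGraphLayer(nn.Module):
    """Learns a weighted adjacency A over features and runs one
    GNN-style message pass along it."""
    def __init__(self):
        super().__init__()
        self.A = nn.Parameter(torch.zeros(N_FEATURES, N_FEATURES))
        self.update = nn.Linear(HIDDEN, HIDDEN)

    def forward(self, h):                               # h: (batch, N_FEATURES, HIDDEN)
        adj = torch.sigmoid(self.A) * (1 - torch.eye(N_FEATURES))  # drop self-loops
        msg = torch.einsum("ij,bjh->bih", adj, h)       # aggregate parents' states
        return torch.relu(self.update(msg)), adj

def acyclicity_penalty(adj):
    """NOTEARS-style constraint: trace(exp(A*A)) - d = 0 iff A encodes a DAG."""
    return torch.trace(torch.linalg.matrix_exp(adj * adj)) - N_FEATURES

encoder, graph = PromptEncoder(), CausalGraphLayer()
readout = nn.Linear(HIDDEN, 1)                          # predicts jailbreak success
opt = torch.optim.Adam([*encoder.parameters(), *graph.parameters(),
                        *readout.parameters()], lr=1e-3)

x = torch.rand(64, N_FEATURES)                          # toy annotated feature values
y = torch.randint(0, 2, (64, 1)).float()                # toy jailbreak labels

for _ in range(100):
    h = encoder(x)
    h, adj = graph(h)
    logit = readout(h.mean(dim=1))
    loss = nn.functional.binary_cross_entropy_with_logits(logit, y)
    loss = loss + 0.1 * acyclicity_penalty(adj) + 1e-3 * adj.abs().sum()  # DAG + sparsity
    opt.zero_grad()
    loss.backward()
    opt.step()

In this toy objective, the learned adjacency plays the role of the causal graph, and the features feeding most strongly into the jailbreak prediction correspond conceptually to the direct causal drivers the paper reports; the actual framework, loss design, and validation are described in the paper and its repository.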
