Linzhi Chen (ShanghaiTech University), Yang Sun (Independent Researcher), Hongru Wei (ShanghaiTech University), Yuqi Chen (ShanghaiTech University)

Low-Rank Adaptation (LoRA) has emerged as an efficient method for fine-tuning large language models (LLMs) and is widely adopted within the open-source community. However, the decentralized dissemination of LoRA adapters through platforms such as Hugging Face introduces novel security vulnerabilities: malicious adapters can be easily distributed and evade conventional oversight mechanisms. Despite these risks, backdoor attacks targeting LoRA-based fine-tuning remain relatively underexplored. Existing backdoor attack strategies are ill-suited to this setting, as they often rely on inaccessible training data, fail to account for the structural properties unique to LoRA, or suffer from high false trigger rates (FTR), thereby compromising their stealth.
To address these challenges, we propose the Causal-Guided Detoxify Backdoor Attack (CBA), a novel backdoor attack framework designed for open-weight LoRA models. CBA operates without access to the original training data and achieves high stealth through two key innovations: (1) a coverage-guided data generation pipeline that synthesizes task-aligned inputs via behavioral exploration, and (2) a causal-guided detoxification strategy that merges poisoned and clean adapters while preserving task-critical neurons.
Unlike prior approaches, CBA enables post-training control over attack intensity through causal influence-based weight allocation, eliminating the need for repeated retraining. Evaluated across six LoRA models, CBA achieves high attack success rates while reducing FTR by 50–70% compared to baseline methods. Furthermore, it demonstrates enhanced resistance to state-of-the-art backdoor defenses, highlighting its stealth and robustness.
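To make the adapter-merging idea concrete, the sketch below shows one plausible way a per-neuron, influence-weighted blend of a clean and a poisoned LoRA adapter could be implemented, with a post-training intensity knob. The function name, tensor layout, and the influence scores are illustrative assumptions; the paper's actual causal-influence estimation and merging rule are not reproduced here.

```python
# Hypothetical sketch (not the authors' released code): per-neuron merging of a
# clean and a poisoned LoRA adapter, with an attack-intensity knob applied after
# training. The influence scores stand in for CBA's causal-influence estimates.
import torch


def merge_lora_adapters(clean: dict, poisoned: dict, influence: dict, alpha: float = 0.5) -> dict:
    """Blend two LoRA state dicts parameter-by-parameter.

    influence[name] holds one score in [0, 1] per output neuron (row) of the
    LoRA matrix: high-influence (task-critical) neurons keep the clean weights,
    while the poisoned update is concentrated in low-influence neurons.
    alpha scales the strength of the injected backdoor without any retraining.
    """
    merged = {}
    for name, w_clean in clean.items():
        w_poison = poisoned[name]
        # Reshape the per-row scores so they broadcast over the remaining dims.
        lam = influence[name].view(-1, *([1] * (w_clean.dim() - 1)))
        merged[name] = w_clean + alpha * (1.0 - lam) * (w_poison - w_clean)
    return merged


# Toy usage with random stand-ins for a single LoRA "A" matrix (rank 8, width 64).
clean = {"layer0.lora_A": torch.randn(8, 64)}
poisoned = {"layer0.lora_A": torch.randn(8, 64)}
influence = {"layer0.lora_A": torch.rand(8)}  # one causal-influence score per row
merged = merge_lora_adapters(clean, poisoned, influence, alpha=0.7)
```

Because alpha enters only at merge time, attack intensity can be re-tuned by recomputing this blend, consistent with the abstract's claim of post-training control without repeated retraining.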
