Tian Dong (Shanghai Jiao Tong University), Minhui Xue (CSIRO's Data61), Guoxing Chen (Shanghai Jiao Tong University), Rayne Holland (CSIRO's Data61), Yan Meng (Shanghai Jiao Tong University), Shaofeng Li (Southeast University), Zhen Liu (Shanghai Jiao Tong University), Haojin Zhu (Shanghai Jiao Tong University)

Open-source Large Language Models (LLMs) have recently gained popularity because of their comparable performance to proprietary LLMs. To efficiently fulfill domain-specialized tasks, open-source LLMs can be refined, without expensive accelerators, using low-rank adapters. However, it is still unknown whether low-rank adapters can be exploited to control LLMs. To address this gap, we demonstrate that an infected adapter can induce, on specific triggers, an LLM to output content defined by an adversary and to even maliciously use tools. To train a Trojan adapter, we propose two novel attacks, POLISHED and FUSION, that improve over prior approaches. POLISHED uses a superior LLM to align naïvely poisoned data based on our insight that it can better inject poisoning knowledge during training. In contrast, FUSION leverages a novel over-poisoning procedure to transform a benign adapter into a malicious one by magnifying the attention between trigger and target in model weights. In our experiments, we first conduct two case studies to demonstrate that a compromised LLM agent can use malware to control the system (e.g., a LLM-driven robot) or to launch a spear-phishing attack. Then, in terms of targeted misinformation, we show that our attacks provide higher attack effectiveness than the existing baseline and, for the purpose of attracting downloads, preserve or improve the adapter’s utility. Finally, we designed and evaluated three potential defenses. However, none proved entirely effective in safeguarding against our attacks, highlighting the need for more robust defenses supporting a secure LLM supply chain.

View More Papers

Defending Against Membership Inference Attacks on Iteratively Pruned Deep...

Jing Shang (Beijing Jiaotong University), Jian Wang (Beijing Jiaotong University), Kailun Wang (Beijing Jiaotong University), Jiqiang Liu (Beijing Jiaotong University), Nan Jiang (Beijing University of Technology), Md Armanuzzaman (Northeastern University), Ziming Zhao (Northeastern University)

Read More

Magmaw: Modality-Agnostic Adversarial Attacks on Machine Learning-Based Wireless Communication...

Jung-Woo Chang (University of California, San Diego), Ke Sun (University of California, San Diego), Nasimeh Heydaribeni (University of California, San Diego), Seira Hidano (KDDI Research, Inc.), Xinyu Zhang (University of California, San Diego), Farinaz Koushanfar (University of California, San Diego)

Read More

No Source Code? No Problem! Twenty Years of Research...

Jack W. Davidson, Professor of Computer Science in the School of Engineering and Applied Science, University of Virginia

Read More

YuraScanner: Leveraging LLMs for Task-driven Web App Scanning

Aleksei Stafeev (CISPA Helmholtz Center for Information Security), Tim Recktenwald (CISPA Helmholtz Center for Information Security), Gianluca De Stefano (CISPA Helmholtz Center for Information Security), Soheil Khodayari (CISPA Helmholtz Center for Information Security), Giancarlo Pellegrino (CISPA Helmholtz Center for Information Security)

Read More