Xiangxiang Chen (Zhejiang University), Peixin Zhang (Singapore Management University), Jun Sun (Singapore Management University), Wenhai Wang (Zhejiang University), Jingyi Wang (Zhejiang University)

Model quantization is a popular technique for deploying deep learning models in resource-constrained environments. However, it may also introduce previously overlooked security risks. In this work, we present QuRA, a novel backdoor attack that exploits model quantization to embed malicious behaviors. Unlike conventional backdoor attacks that rely on training-data poisoning or manipulation of model training, QuRA operates solely through the quantization process. In particular, QuRA first employs a novel weight-selection strategy to identify critical weights that influence the backdoor target, with the goal of preserving the model's overall performance. Then, by optimizing the rounding direction of these weights, it amplifies the backdoor effect across model layers without degrading accuracy. Extensive experiments demonstrate that QuRA achieves nearly 100% attack success rates in most cases, with negligible performance degradation. Furthermore, we show that QuRA can be adapted to bypass existing backdoor defenses, underscoring its threat potential. Our findings highlight a critical vulnerability in the widely used model quantization process and emphasize the need for more robust security measures. Our implementation is available at https://github.com/cxx122/QuRA.
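The core intuition behind the rounding-direction degree of freedom can be illustrated with a minimal sketch. This is not the paper's implementation; the `quantize_weight` helper and its parameters are hypothetical, and only uniform (affine) quantization with a fixed step size is assumed. Standard post-training quantization rounds each weight to the nearest grid point; an attacker who controls the quantization step may instead round selected weights up or down, and either forced choice stays within one quantization step of the round-to-nearest result, which is why overall accuracy is barely affected.

```python
import math

def quantize_weight(w: float, scale: float, round_up=None) -> float:
    """Uniformly quantize weight w with step size `scale`, then dequantize.

    round_up=None  -> standard round-to-nearest (benign quantization)
    round_up=True  -> rounding forced toward +infinity (attacker-chosen)
    round_up=False -> rounding forced toward -infinity (attacker-chosen)

    A forced direction differs from round-to-nearest by at most one
    quantization step, so the per-weight perturbation is bounded by `scale`.
    """
    q = w / scale
    if round_up is None:
        qi = math.floor(q + 0.5)   # round to nearest grid point
    elif round_up:
        qi = math.ceil(q)          # force rounding up
    else:
        qi = math.floor(q)         # force rounding down
    return qi * scale              # dequantized weight value

# A weight of 0.137 with step 0.01 quantizes to ~0.14 by default;
# forcing the direction down instead yields ~0.13, a one-step change.
benign = quantize_weight(0.137, 0.01)
forced = quantize_weight(0.137, 0.01, round_up=False)
```

In this framing, the attack reduces to choosing, for a small set of critical weights, which of the two adjacent grid points to snap to, such that the accumulated one-step perturbations implement the backdoor behavior.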
