Yanzuo Chen (The Hong Kong University of Science and Technology), Zhibo Liu (The Hong Kong University of Science and Technology), Yuanyuan Yuan (The Hong Kong University of Science and Technology), Sihang Hu (Huawei Technologies), Tianxiang Li (Huawei Technologies), Shuai Wang (The Hong Kong University of Science and Technology)

Recent research has shown that bit-flip attacks (BFAs) can manipulate deep neural networks (DNNs) via DRAM Rowhammer exploitation. For high-level DNN models running on deep learning (DL) frameworks like PyTorch, numerous BFAs that flip bits in model weights have been conducted and shown to be effective, and defenses have been proposed to guard model weights. Nevertheless, DNNs are increasingly compiled into DNN executables by DL compilers to leverage hardware primitives. These executables manifest new and distinct computation paradigms; we find that existing research fails to accurately capture and expose the attack surface of BFAs on DNN executables.
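To make the weight-flipping threat concrete, the following minimal Python sketch (our illustration, not the paper's tooling) shows why a single Rowhammer-induced bit flip in a float32 weight can be devastating: flipping the most significant exponent bit turns a small weight into an astronomically large one.

```python
import struct

def flip_bit(value: float, bit: int) -> float:
    """Flip one bit (0 = LSB, 31 = sign bit) in a float32's IEEE-754 encoding."""
    (bits,) = struct.unpack("<I", struct.pack("<f", value))
    bits ^= 1 << bit
    return struct.unpack("<f", struct.pack("<I", bits))[0]

w = 0.125                       # a typical small model weight
print(flip_bit(w, 30))          # top exponent bit flipped: ~4.3e+37
```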

To this end, we launch the first systematic study of BFAs on DNN executables and reveal new attack surfaces neglected or underestimated by prior work. Specifically, prior BFAs in DL frameworks are limited to attacking model weights and assume a strong whitebox attacker with full knowledge of the victim's weights, an unrealistic assumption since weights are usually confidential. In contrast, we find that BFAs on DNN executables can be highly effective by instead exploiting the model structure, which is typically encoded in the executable's code and requires knowing only the (often public) architecture. Importantly, such structure-based BFAs are pervasive, transferable, and more severe in DNN executables (e.g., a single bit flip can yield a successful attack); they also slip past existing defenses.
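The severity of structure-based flips can be illustrated with a hedged analogy (ours, not the paper's actual attack): DL compilers bake structural parameters such as loop bounds and strides into the executable's code as immediate operands, so flipping one bit of such an immediate corrupts the computation without touching any weight. The simulation below models a matmul whose inner-loop bound plays the role of that immediate.

```python
import numpy as np

def matmul(a, b, k_bound):
    # k_bound mimics an inner-loop bound baked into compiled code as an immediate
    out = np.zeros((a.shape[0], b.shape[1]), dtype=a.dtype)
    for i in range(a.shape[0]):
        for j in range(b.shape[1]):
            for k in range(k_bound):
                out[i, j] += a[i, k] * b[k, j]
    return out

a = np.ones((4, 64), dtype=np.float32)
b = np.ones((64, 4), dtype=np.float32)
print(matmul(a, b, 64)[0, 0])               # 64.0: correct result
print(matmul(a, b, 64 ^ (1 << 6))[0, 0])    # 0.0: one flipped bit zeroes the bound
```

Because every deployment of the same architecture compiles to the same immediates, a flip of this kind applies regardless of the victim's confidential weights, which is consistent with the transferability described above.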

To realistically demonstrate the new attack surfaces, we assume a weak and more realistic attacker with no knowledge of the victim's model weights. We design an automated tool that identifies vulnerable bits in victim executables with high confidence (70%, versus a 2% baseline). Launching this tool on DDR4 DRAM, we show that an average of only 1.4 flips is needed to fully downgrade the accuracy of victim executables to random guessing; this includes quantized models, which previously could require 23× more flips to attack. We comprehensively evaluate 16 DNN executables, covering three large-scale DNN models trained on three commonly used datasets and compiled by the two most popular DL compilers. Our findings call for incorporating security mechanisms into future DNN compilation toolchains.
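A vulnerable-bit search of the kind described above might look like the following hedged sketch (all names, including the evaluate() helper, are hypothetical placeholders rather than the paper's tool): each candidate bit in the executable's code is flipped in a copy, and bits that consistently drive accuracy to chance level on the attacker's own inputs are flagged.

```python
def find_vulnerable_bits(exe, candidates, evaluate, chance=0.1, trials=3):
    """Return bit offsets whose flip consistently drops accuracy to ~chance.

    evaluate(blob) -> accuracy is a hypothetical stand-in for running the
    (possibly corrupted) executable on the attacker's own labeled inputs.
    """
    vulnerable = []
    for bit in candidates:
        blob = bytearray(exe)                 # work on a copy of the code bytes
        blob[bit // 8] ^= 1 << (bit % 8)      # flip a single candidate bit
        accuracies = [evaluate(bytes(blob)) for _ in range(trials)]
        if max(accuracies) <= 1.5 * chance:   # near random guessing every time
            vulnerable.append(bit)
    return vulnerable
```

For instance, one might call find_vulnerable_bits(open("model.so", "rb").read(), candidate_offsets, my_evaluate), where model.so, candidate_offsets, and my_evaluate are again hypothetical placeholders for the victim executable, the bit offsets of its code section, and an accuracy-measurement harness.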
