Hui Xia (Ocean University of China), Rui Zhang (Ocean University of China), Zi Kang (Ocean University of China), Shuliang Jiang (Ocean University of China), Shuo Xu (Ocean University of China)

Although there has been extensive research on the transferability of adversarial attacks, existing methods for generating adversarial examples suffer from two significant drawbacks: poor stealthiness and low attack efficacy under low-round attacks. To address these issues, we propose an adversarial example generation method that ensembles the class activation maps of multiple models, called class activation mapping ensemble attack. We first use the class activation mapping method to discover the relationship between the decision of the deep neural network and the image region. We then calculate the class activation score for each pixel and use it as the weight for the perturbation, which enhances the stealthiness of adversarial examples and improves attack performance under low attack rounds. During optimization, we also ensemble the class activation maps of multiple models to ensure the transferability of the adversarial attack algorithm. Experimental results show that our method generates adversarial examples with high imperceptibility, transferability, attack performance under low-round attacks, and evasiveness. Specifically, when our attack capability is comparable to the most potent attack (VMIFGSM), our imperceptibility is close to that of the best-performing attack (TPGD). For non-targeted attacks, our method outperforms VMIFGSM by an average of 11.69% in attack capability against 13 target models and outperforms TPGD by an average of 37.15%. For targeted attacks, our method achieves the fastest convergence and the most potent attack efficacy, and it significantly outperforms the eight baseline methods in low-round attacks. Furthermore, our method can evade defenses and be used to assess the robustness of models.
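The sketch below illustrates the general idea described above, not the authors' implementation: an iterative signed-gradient attack whose per-pixel step is weighted by class activation maps averaged over an ensemble of surrogate models. The Grad-CAM-style helper `compute_cam`, the choice of target layers, and the hyperparameters are all assumptions introduced for illustration, since the abstract does not specify them.

```python
# Minimal sketch (assumed details, not the authors' code): CAM-weighted,
# ensemble-based iterative attack in PyTorch.
import torch
import torch.nn.functional as F

def compute_cam(model, feature_layer, x, y):
    """Simplified Grad-CAM: weight feature maps by channel-pooled gradients,
    upsample to the input size, and normalize to [0, 1]."""
    feats, grads = [], []
    h1 = feature_layer.register_forward_hook(lambda m, i, o: feats.append(o))
    h2 = feature_layer.register_full_backward_hook(lambda m, gi, go: grads.append(go[0]))
    logits = model(x)
    model.zero_grad()
    logits.gather(1, y.unsqueeze(1)).sum().backward()
    h1.remove(); h2.remove()
    w = grads[0].mean(dim=(2, 3), keepdim=True)            # channel weights
    cam = F.relu((w * feats[0]).sum(dim=1, keepdim=True))  # B x 1 x h x w
    cam = F.interpolate(cam, size=x.shape[-2:], mode="bilinear", align_corners=False)
    cam_min = cam.amin(dim=(2, 3), keepdim=True)
    cam_max = cam.amax(dim=(2, 3), keepdim=True)
    return ((cam - cam_min) / (cam_max - cam_min + 1e-8)).detach()

def cam_ensemble_attack(models, layers, x, y, eps=8/255, alpha=2/255, steps=10):
    """Non-targeted attack: average CAMs over the surrogate ensemble and use
    them as per-pixel weights on the signed-gradient step (assumed weighting)."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        cams = torch.stack([compute_cam(m, l, x_adv, y) for m, l in zip(models, layers)])
        weight = cams.mean(dim=0)                          # ensemble CAM weight
        loss = sum(F.cross_entropy(m(x_adv), y) for m in models)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * weight * grad.sign()
        x_adv = x + (x_adv - x).clamp(-eps, eps)           # stay in the L_inf ball
        x_adv = x_adv.clamp(0, 1)
    return x_adv
```

Concentrating the perturbation on highly activated regions is what the weighting is meant to capture: pixels that drive the ensemble's decision receive larger updates, while background pixels are perturbed less, which is consistent with the stealthiness and low-round efficacy claims in the abstract.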
