Zhibo Jin (The University of Sydney), Jiayu Zhang (Suzhou Yierqi), Zhiyu Zhu, Huaming Chen (The University of Sydney)

The robustness of deep learning models against adversarial attacks remains a pivotal concern. This study presents, for the first time, an exhaustive review of the transferability aspect of adversarial attacks. It systematically categorizes and critically evaluates the methodologies developed to augment the transferability of adversarial attacks, spanning a spectrum of techniques: Generative Structure, Semantic Similarity, Gradient Editing, Target Modification, and Ensemble Approach. Concurrently, this paper introduces a benchmark framework, TAA-Bench, integrating ten leading methodologies for adversarial attack transferability and thereby providing a standardized, systematic platform for comparative analysis across diverse model architectures. Through comprehensive scrutiny, we delineate the efficacy and constraints of each method, shedding light on their underlying operational principles and practical utility. This review endeavors to be a quintessential resource for both scholars and practitioners in the field, charting the complex terrain of adversarial transferability and setting a foundation for future explorations in this vital sector. The associated codebase is accessible at: https://github.com/KxPlaug/TAA-Bench
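To make the notion of transferability concrete: an adversarial example is crafted against a surrogate model the attacker can access, then evaluated against a separate target model. The following is a minimal, self-contained sketch of this idea using an FGSM-style perturbation on toy linear classifiers; it is illustrative only and does not reproduce any TAA-Bench method, and all names (`surrogate_w`, `target_w`, `fgsm_transfer`) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def logits(w, x):
    """Toy linear 'model': class scores for input x."""
    return w @ x

def loss_grad_wrt_x(w, x, y):
    """Gradient of softmax cross-entropy loss w.r.t. the input x."""
    z = logits(w, x)
    p = np.exp(z - z.max())
    p /= p.sum()
    onehot = np.zeros_like(p)
    onehot[y] = 1.0
    return w.T @ (p - onehot)

def fgsm_transfer(surrogate_w, x, y, eps):
    """Craft the perturbation on the surrogate only, hoping it transfers."""
    g = loss_grad_wrt_x(surrogate_w, x, y)
    # One signed gradient step, clipped back to the valid input range.
    return np.clip(x + eps * np.sign(g), 0.0, 1.0)

# Two similar models: the attacker sees only the surrogate.
surrogate_w = rng.normal(size=(3, 8))
target_w = surrogate_w + 0.1 * rng.normal(size=(3, 8))

x = rng.uniform(size=8)
y = int(np.argmax(logits(target_w, x)))  # target's clean prediction

x_adv = fgsm_transfer(surrogate_w, x, y, eps=0.3)
transferred = int(np.argmax(logits(target_w, x_adv))) != y
print("attack transferred to target model:", transferred)
```

Whether the attack transfers depends on how closely the target's decision boundary tracks the surrogate's; the methods surveyed in this paper (gradient editing, ensembles, etc.) aim to raise that transfer rate.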
