Weiran Lin (Carnegie Mellon University), Keane Lucas (Carnegie Mellon University), Neo Eyal (Tel Aviv University), Lujo Bauer (Carnegie Mellon University), Michael K. Reiter (Duke University), Mahmood Sharif (Tel Aviv University)

Machine-learning models are known to be vulnerable to evasion attacks that perturb model inputs to induce misclassifications. In this work, we identify real-world scenarios where the true threat cannot be assessed accurately by existing attacks. Specifically, we find that conventional metrics measuring targeted and untargeted robustness do not appropriately reflect a model's ability to withstand attacks from one set of source classes to another set of target classes. To address the shortcomings of existing methods, we formally define a new metric, termed group-based robustness, that complements existing metrics and is better suited for evaluating model performance in certain attack scenarios. We show empirically that group-based robustness allows us to distinguish between models' vulnerability to specific threat models in situations where traditional robustness metrics do not apply. Moreover, to measure group-based robustness efficiently and accurately, we 1) propose two loss functions and 2) identify three new attack strategies. We show empirically that, with comparable success rates, finding evasive samples using our new loss functions saves computation by a factor as large as the number of targeted classes, and finding evasive samples using our new attack strategies saves time by up to 99% compared to brute-force search methods. Finally, we propose a defense method that increases group-based robustness by up to 3.52 times.

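To make the setting concrete, the sketch below illustrates one plausible way to implement a group-targeted adversarial loss and a group-based success metric in PyTorch: rather than running a separate targeted attack for each class in the target set (a brute-force search), a single loss steers the input toward whichever target class is easiest to reach. The function names, the margin-style loss, and the interface are our own illustrative assumptions, not the paper's actual formulations.

```python
# Illustrative sketch only; the paper's two loss functions and three attack
# strategies are not reproduced here. All names below are hypothetical.
import torch


def group_targeted_loss(logits: torch.Tensor, target_group: list[int]) -> torch.Tensor:
    """Margin loss encouraging the most reachable class in `target_group`
    to outscore every class outside the group, so one attack run can
    replace per-class targeted attacks over the whole group."""
    group_logits = logits[:, target_group]            # (batch, |group|)
    best_target = group_logits.max(dim=1).values      # easiest in-group class
    mask = torch.ones(logits.shape[1], dtype=torch.bool, device=logits.device)
    mask[target_group] = False                        # classes outside the group
    best_other = logits[:, mask].max(dim=1).values
    return (best_other - best_target).clamp(min=0).mean()


def group_success_rate(logits: torch.Tensor, target_group: list[int]) -> float:
    """Group-based success metric (sketch): fraction of inputs whose
    predicted class lands anywhere inside the target group."""
    preds = logits.argmax(dim=1)
    in_group = torch.isin(preds, torch.tensor(target_group, device=logits.device))
    return in_group.float().mean().item()


# Example usage (hypothetical model and target group):
#   logits = model(x_adv)
#   loss = group_targeted_loss(logits, target_group=[3, 7, 9])
#   rate = group_success_rate(logits, target_group=[3, 7, 9])
```

Minimizing a single group-level loss of this kind is why the per-attack cost can shrink by roughly a factor of the number of targeted classes relative to attacking each target class separately.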