Weiran Lin (Carnegie Mellon University), Keane Lucas (Carnegie Mellon University), Neo Eyal (Tel Aviv University), Lujo Bauer (Carnegie Mellon University), Michael K. Reiter (Duke University), Mahmood Sharif (Tel Aviv University)

Machine-learning models are known to be vulnerable to evasion attacks that perturb model inputs to induce misclassifications. In this work, we identify real-world scenarios where the true threat cannot be assessed accurately by existing attacks. Specifically, we find that conventional metrics measuring targeted and untargeted robustness do not appropriately reflect a model's ability to withstand attacks from one set of source classes to another set of target classes. To address the shortcomings of existing methods, we formally define a new metric, termed group-based robustness, that complements existing metrics and is better suited for evaluating model performance in certain attack scenarios. We show empirically that group-based robustness allows us to distinguish between models' vulnerability to specific threat models in situations where traditional robustness metrics do not apply. Moreover, to measure group-based robustness efficiently and accurately, we 1) propose two loss functions and 2) identify three new attack strategies. We show empirically that, with comparable success rates, finding evasive samples using our new loss functions saves computation by a factor as large as the number of targeted classes, and finding evasive samples using our new attack strategies saves time by up to 99% compared to brute-force search methods. Finally, we propose a defense method that increases group-based robustness by up to 3.52 times.
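The abstract does not define the two proposed loss functions concretely, so the following is only a speculative illustration. Assuming an attack counts as successful when the model assigns any class in the target set T, a single loss over the whole set could replace |T| separate targeted attacks, which would match the abstract's claimed computational savings. The PyTorch-style sketch below (the name group_targeted_loss and all details are hypothetical, not taken from the paper) shows one such margin loss that a gradient-based attack such as PGD could minimize with respect to the input perturbation:

import torch

def group_targeted_loss(logits: torch.Tensor, target_classes: list[int]) -> torch.Tensor:
    # logits: (batch, num_classes) model outputs on the perturbed inputs.
    # target_classes: indices of the target set T.
    num_classes = logits.shape[1]
    target_mask = torch.zeros(num_classes, dtype=torch.bool, device=logits.device)
    target_mask[target_classes] = True

    best_target = logits[:, target_mask].max(dim=1).values   # strongest class inside T
    best_other = logits[:, ~target_mask].max(dim=1).values   # strongest class outside T
    # Driving this margin to zero pushes some class in T above every class
    # outside T, so one optimization run covers all targets in T at once.
    return torch.clamp(best_other - best_target, min=0).mean()

Under this assumption, a single gradient-based attack run per input replaces one targeted attack per class in T, consistent with the abstract's claim of saving computation by a factor as large as the number of targeted classes.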

View More Papers

Securing the Satellite Software Stack

Samuel Jero (MIT Lincoln Laboratory), Juliana Furgala (MIT Lincoln Laboratory), Max A Heller (MIT Lincoln Laboratory), Benjamin Nahill (MIT Lincoln Laboratory), Samuel Mergendahl (MIT Lincoln Laboratory), Richard Skowyra (MIT Lincoln Laboratory)


Sticky Fingers: Resilience of Satellite Fingerprinting against Jamming Attacks

Joshua Smailes (University of Oxford), Edd Salkield (University of Oxford), Sebastian Köhler (University of Oxford), Simon Birnbach (University of Oxford), Martin Strohmeier (Cyber-Defence Campus, armasuisse S+T), Ivan Martinovic (University of Oxford)


BreakSPF: How Shared Infrastructures Magnify SPF Vulnerabilities Across the...

Chuhan Wang (Tsinghua University), Yasuhiro Kuranaga (Tsinghua University), Yihang Wang (Tsinghua University), Mingming Zhang (Zhongguancun Laboratory), Linkai Zheng (Tsinghua University), Xiang Li (Tsinghua University), Jianjun Chen (Tsinghua University; Zhongguancun Laboratory), Haixin Duan (Tsinghua University; Quan Cheng Lab; Zhongguancun Laboratory), Yanzhong Lin (Coremail Technology Co. Ltd), Qingfeng Pan (Coremail Technology Co. Ltd)


PANDORA: Jailbreak GPTs by Retrieval Augmented Generation Poisoning

Gelei Deng (Nanyang Technological University), Yi Liu (Nanyang Technological University), Yuekang Li (The University of New South Wales), Kailong Wang (Huazhong University of Science and Technology), Tianwei Zhang (Nanyang Technological University), Yang Liu (Nanyang Technological University)
