Viet Quoc Vo (The University of Adelaide), Ehsan Abbasnejad (The University of Adelaide), Damith C. Ranasinghe (The University of Adelaide)

Machine learning models are critically susceptible to evasion attacks built from adversarial examples. Generally, adversarial examples (modified inputs deceptively similar to the original input) are constructed under white-box settings by adversaries with full access to the model. However, recent black-box attacks have shown a remarkable reduction in the number of queries needed to craft adversarial examples. Particularly alarming is the now practical ability to exploit only the classification decision (the hard label) returned by the access interface of a trained model, an interface offered by a growing number of Machine Learning as a Service (MLaaS) providers, including Google, Microsoft and IBM, and used by a plethora of applications incorporating these models. An adversary's ability to craft adversarial examples by exploiting only the predicted hard label from a model query is known as a decision-based attack.
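To make the threat model concrete, the minimal sketch below illustrates what a hard-label-only query interface exposes to an adversary. The class, method names, and query-counting wrapper are illustrative assumptions, not part of the paper or of any MLaaS API.

```python
# Minimal sketch of the hard-label (decision-based) threat model described above.
# All names here are hypothetical placeholders, not an actual MLaaS interface.
import numpy as np

class HardLabelOracle:
    """Wraps a deployed classifier so the adversary sees only the predicted label."""

    def __init__(self, model):
        self.model = model          # e.g., a callable standing in for a remote endpoint
        self.query_count = 0        # decision-based attacks are judged by query budget

    def predict_label(self, x: np.ndarray) -> int:
        self.query_count += 1
        scores = self.model(x)      # scores/probabilities are NOT exposed to the attacker
        return int(np.argmax(scores))
```

Under this interface, the attacker receives a single class index per query and must infer everything else, which is what makes low-distortion attacks expensive in queries.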

In our study, we first examine recent state-of-the-art decision-based attacks published at ICLR and S&P to highlight how costly it is to discover low-distortion adversarial examples with approximate gradient estimation methods. We then develop a robust class of query-efficient attacks that avoids the entrapment in local minima and the misdirection from noisy gradients observed in gradient estimation methods. Our proposed attack, RamBoAttack, exploits the notion of Randomized Block Coordinate Descent to explore the hidden classifier manifold, targeting perturbations that manipulate only localized input features to overcome the shortcomings of gradient estimation methods. Importantly, RamBoAttack is demonstrably more robust to the different sample inputs available to an adversary and/or to the targeted class. Overall, for a given target class, RamBoAttack achieves lower distortion and a higher attack success rate within a given query budget. We curate our results on the large-scale, high-resolution ImageNet dataset and open source our attack, test samples and artifacts.
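As an illustration only, the following sketch outlines one plausible randomized block coordinate descent step in the hard-label setting described above; the block size, step size, and acceptance rule are assumptions chosen for exposition and do not reproduce the authors' RamBoAttack implementation.

```python
# Hedged sketch of a randomized block coordinate descent step for a targeted
# hard-label attack: perturb one localized pixel block and keep the change only
# if the oracle's decision remains the adversarial target label.
import numpy as np

def random_block_descent_step(oracle, x_adv, x_orig, target_label,
                              block_size=8, step=0.05, rng=None):
    """One candidate update on a randomly chosen spatial block of the image.

    x_adv is a current adversarial example (already classified as target_label),
    x_orig is the benign input; moving x_adv toward x_orig reduces distortion.
    """
    rng = rng or np.random.default_rng()
    h, w = x_adv.shape[:2]
    # Pick a random spatial block (the "localized input features").
    top = rng.integers(0, max(1, h - block_size))
    left = rng.integers(0, max(1, w - block_size))
    block = (slice(top, top + block_size), slice(left, left + block_size))
    # Move only that block a small step toward the original image.
    candidate = x_adv.copy()
    candidate[block] += step * (x_orig[block] - x_adv[block])
    # Accept the update only if the hard-label decision is still the target class.
    if oracle.predict_label(candidate) == target_label:
        return candidate
    return x_adv
```

Repeating such block-wise steps from a starting point in the target class gradually lowers the distortion to the benign input without ever requiring gradient estimates, which is the intuition behind avoiding the local-minimum and noisy-gradient issues noted above.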
