Viet Quoc Vo (The University of Adelaide), Ehsan Abbasnejad (The University of Adelaide), Damith C. Ranasinghe (The University of Adelaide)

Machine learning models are critically susceptible to evasion attacks based on adversarial examples. Generally, adversarial examples, modified inputs deceptively similar to the original input, are constructed by adversaries with full whitebox access to the model. However, recent blackbox attacks have shown a remarkable reduction in the number of queries needed to craft adversarial examples. Particularly alarming is the now practical ability to exploit merely the classification decision (the hard label only) exposed by the access interfaces of trained models, offered by a growing number of Machine Learning as a Service (MLaaS) providers, including Google, Microsoft and IBM, and used by a plethora of applications incorporating these models. An adversary's ability to craft adversarial examples by exploiting only the predicted hard label from a model query is distinguished as a decision-based attack.
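To make the threat model concrete, the following is a minimal sketch of a hard-label query interface of the kind the abstract describes; the `model` callable and all names here are illustrative assumptions, not the paper's or any MLaaS provider's actual API.

```python
import numpy as np

def hard_label_oracle(model, x):
    """Decision-based threat model: the service returns only the
    predicted class label; logits and probabilities stay hidden."""
    logits = model(x)              # internal to the service
    return int(np.argmax(logits))  # the sole signal the adversary sees

def is_adversarial(model, x_adv, target_class):
    """A targeted attack succeeds once the hard label equals the target."""
    return hard_label_oracle(model, x_adv) == target_class
```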

In our study, we first take a deep dive into recent state-of-the-art decision-based attacks published at ICLR and IEEE S&P to highlight the costly nature of discovering low-distortion adversarial examples with approximate gradient estimation methods. We develop a robust class of query-efficient attacks capable of avoiding entrapment in local minima and misdirection from the noisy gradients seen in gradient estimation methods. The attack method we propose, RamBoAttack, exploits the notion of Randomized Block Coordinate Descent to explore the hidden classifier manifold, targeting perturbations that manipulate only localized input features to address the shortcomings of gradient estimation methods. Importantly, RamBoAttack is demonstrably more robust to the different sample inputs available to an adversary and/or to the targeted class. Overall, for a given target class, RamBoAttack is demonstrated to be more robust at achieving a lower distortion and a higher attack success rate within a given query budget. We curate our results on the large-scale, high-resolution ImageNet dataset and open-source our attack, test samples and artifacts.
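The abstract does not detail the algorithm, so the sketch below illustrates only the general idea of randomized block coordinate descent in a targeted hard-label setting: perturb one randomly chosen localized block of pixels at a time, accepting a change only if the input remains adversarial and its distortion shrinks. The function names, the block-blending update and the acceptance rule are our illustrative assumptions, not the authors' RamBoAttack.

```python
import numpy as np

def block_coordinate_descent(oracle, x_orig, x_adv, target,
                             block=8, steps=1000, rng=None):
    """Hypothetical targeted hard-label attack via randomized block
    coordinate descent: `oracle(x)` returns only the predicted label,
    `x_adv` is a starting point already classified as `target`, and
    we iteratively pull random localized blocks toward `x_orig`."""
    rng = rng or np.random.default_rng()
    h, w = x_orig.shape[:2]
    best = x_adv.copy()
    for _ in range(steps):
        # pick a random localized block (the coordinate subset)
        i = rng.integers(0, h - block)
        j = rng.integers(0, w - block)
        cand = best.copy()
        # move only that block's pixels halfway toward the original input
        cand[i:i+block, j:j+block] = (
            0.5 * cand[i:i+block, j:j+block]
            + 0.5 * x_orig[i:i+block, j:j+block]
        )
        # accept only if still adversarial and distortion decreased
        if (oracle(cand) == target and
                np.linalg.norm(cand - x_orig) < np.linalg.norm(best - x_orig)):
            best = cand
    return best
```

One appeal of this style of search, and plausibly why the paper invokes it, is that each query spends the budget on a single localized region, so the descent does not depend on full-image gradient estimates that can be noisy or trap the search in a local minimum.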
