Qi Xia (University of Texas at San Antonio), Qian Chen (University of Texas at San Antonio)

Traditional black-box adversarial attacks on computer vision models face significant limitations, including intensive querying requirements, time-consuming iterative processes, a lack of universality, and low attack success rates (ASR) and confidence levels (CL) due to subtle perturbations. This paper introduces AlphaDog, an Alpha-channel attack and the first universal, efficient, targeted no-box attack. It exploits the often-overlooked Alpha channel in RGBA images to create a disparity between human perception and machine interpretation, efficiently deceiving both. Specifically, AlphaDog maliciously sets the RGB channels to depict the object the AI should recognize, while crafting the Alpha channel so that, when blended with the standard or default background color of digital media (e.g., thumbnail or image-viewer apps), humans perceive a different image. By leveraging differences in how AI models and human vision process transparency, AlphaDog outperforms existing adversarial attacks in four key ways: (i) as a no-box attack, it requires zero queries; (ii) it is highly efficient, taking milliseconds to generate arbitrary attack images; (iii) it is universal, compromising most AI models with a single attack image; (iv) it guarantees 100% ASR and CL. An assessment of 6,500 AlphaDog attack examples across 100 state-of-the-art image recognition systems demonstrates AlphaDog's effectiveness, and an IRB-approved experiment with 20 college-age participants validates its stealthiness. AlphaDog can be applied to data poisoning, evasion attacks, and content moderation. Additionally, a novel pixel-intensity histogram-based detection method is introduced, achieving 100% effectiveness in detecting AlphaDog and protecting computer vision models against it. Demos are available on the AlphaDog website (https://sites.google.com/view/alphachannelattack/home).
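To make the mechanism concrete, the following is a minimal Python sketch (our illustration, not the authors' implementation). It assumes the victim model ingests only the raw RGB planes of an RGBA image, while a viewer composites the image over an opaque default background with standard "over" blending, shown = alpha * rgb + (1 - alpha) * bg. The alpha_histogram_flag helper is likewise a hypothetical stand-in for the paper's pixel-intensity histogram detector, flagging images with substantial semi-transparent mass.

```python
# Sketch of the alpha-blending discrepancy AlphaDog exploits (assumptions:
# the model drops the Alpha channel; the viewer blends over white).
import numpy as np

def machine_view(rgba: np.ndarray) -> np.ndarray:
    """What an alpha-oblivious model sees: the raw RGB planes."""
    return rgba[..., :3]

def human_view(rgba: np.ndarray, bg=np.array([255.0, 255.0, 255.0])) -> np.ndarray:
    """What a viewer renders: RGB composited over a default background color."""
    alpha = rgba[..., 3:4] / 255.0
    return (alpha * rgba[..., :3] + (1.0 - alpha) * bg).astype(np.uint8)

def alpha_histogram_flag(rgba: np.ndarray, threshold=0.05) -> bool:
    """Toy detector (our assumption, not the paper's exact method): flag
    images whose alpha histogram carries significant semi-transparent mass,
    since benign photos are typically fully opaque."""
    alpha = rgba[..., 3]
    semi_transparent_fraction = np.mean((alpha > 0) & (alpha < 255))
    return semi_transparent_fraction > threshold

# Demo: the RGB planes encode a high-contrast checkerboard for the model,
# while a near-zero alpha makes the composited image look almost white.
rgba = np.zeros((64, 64, 4), dtype=np.uint8)
rgba[..., :3] = 255 * (np.indices((64, 64)).sum(0) % 2)[..., None]  # pattern the model sees
rgba[..., 3] = 16                                                   # nearly transparent everywhere
print(machine_view(rgba).mean())   # ~127: strong signal for the model
print(human_view(rgba).mean())     # ~247: near-white to a human viewer
print(alpha_histogram_flag(rgba))  # True: semi-transparent pixels dominate
```

In this toy setting, a single crafted image yields two divergent views without any queries to the target model, which is consistent with the no-box, zero-query property claimed above.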

