Tao Chen (City University of Hong Kong), Longfei Shangguan (Microsoft), Zhenjiang Li (City University of Hong Kong), Kyle Jamieson (Princeton University)

This paper presents Metamorph, a system that generates imperceptible audio perturbations that survive over-the-air transmission and attack the neural network of a speech recognition system. The key challenge is ensuring that the perturbation, which is added to the original audio in advance at the sender side, remains effective under the unknown signal distortions introduced during transmission. Our empirical study reveals that these distortions are caused mainly by device and channel frequency selectivity, each with distinct characteristics. This observation makes it possible to capture and pre-code their impact so that the generated adversarial examples remain robust over the air. Metamorph exploits this opportunity: it first obtains an initial perturbation that captures the core distortion effects from only a small set of prior measurements, and then applies a domain adaptation algorithm to refine the perturbation, further improving attack distance and reliability. We also reduce the human perceptibility of the added perturbation. Our evaluation achieves a high attack success rate (95%) at attack distances of up to 6 m. Within a moderate distance, e.g., 3 m, Metamorph maintains a 98% success rate and can be further adapted to substantially improve audio quality, as confirmed by a human perceptibility study.
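To make the pre-coding idea concrete, the following is a minimal illustrative sketch (not the authors' implementation) of optimizing a perturbation against a set of impulse responses measured in advance, so that the perturbed audio still drives the recognizer toward the attacker's transcript after frequency-selective filtering. The names `target_loss_fn` and `measured_irs` are placeholders supplied by the caller; the paper's domain adaptation refinement and perceptibility shaping are only approximated here by a simple energy penalty and amplitude bound.

```python
import torch
import torch.nn.functional as F


def craft_channel_aware_perturbation(audio, target_loss_fn, measured_irs,
                                     steps=500, lr=1e-3, eps=0.02,
                                     perceptibility_weight=0.1):
    """
    audio:          (1, T) waveform tensor in [-1, 1].
    target_loss_fn: callable mapping a (1, T) waveform to a scalar loss that is
                    low when the recognizer outputs the attacker's transcript
                    (e.g., a CTC loss against the target text); caller-supplied.
    measured_irs:   list of (1, 1, K) impulse responses measured in advance,
                    modeling device and channel frequency selectivity.
    """
    delta = torch.zeros_like(audio, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = 0.0
        for ir in measured_irs:
            # Simulate the over-the-air distortion by convolving the perturbed
            # audio with one measured impulse response (frequency-selective filter).
            x = (audio + delta).unsqueeze(1)                        # (1, 1, T)
            distorted = F.conv1d(x, ir, padding=ir.shape[-1] // 2)  # (1, 1, ~T)
            loss = loss + target_loss_fn(distorted.squeeze(1))
        # Penalize perturbation energy so it stays less perceptible to humans.
        loss = loss + perceptibility_weight * delta.pow(2).mean()
        loss.backward()
        opt.step()
        with torch.no_grad():
            delta.clamp_(-eps, eps)  # keep the perturbation amplitude bounded
    return (audio + delta).detach()
```

In practice the impulse responses would be measured at representative devices and attack distances, and averaging the loss over several of them is what makes the perturbation robust to a channel that is not known exactly at attack time.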
