Tao Chen (City University of Hong Kong), Longfei Shangguan (Microsoft), Zhenjiang Li (City University of Hong Kong), Kyle Jamieson (Princeton University)

This paper presents Metamorph, a system that generates imperceptible audio that can survive over-the-air transmission to attack the neural network of a speech recognition system. The key challenge is to ensure that the perturbation added to the original audio in advance, at the sender side, remains effective despite unknown signal distortions introduced during over-the-air transmission. Our empirical study reveals that signal distortion is caused mainly by device and channel frequency selectivity, each with different characteristics. This observation makes it possible to capture and pre-code the distortion's impact when generating adversarial examples, so that they are robust to over-the-air transmission. Metamorph leverages this opportunity: it first obtains an initial perturbation that captures the core distortion from only a small set of prior measurements, and then applies a domain adaptation algorithm to refine the perturbation, further improving the attack distance and reliability. We also reduce the human perceptibility of the added perturbation. Our evaluation shows a high attack success rate (95%) at attack distances of up to 6 m. Within a moderate distance, e.g., 3 m, Metamorph maintains a high success rate (98%) and can be further adapted to substantially improve audio quality, as confirmed by a human perceptibility study.
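The core idea can be illustrated with a minimal sketch (not the authors' code): optimize an audio perturbation so that it stays adversarial after being convolved with a small set of measured device/channel impulse responses, which stand in for the over-the-air distortion. The `asr_loss` surrogate, the random impulse responses, and all variable names below are assumptions for illustration only; a real attack would use measurements from the target environment and the target recognizer's loss toward the attacker-chosen transcript.

```python
# Minimal sketch, assuming PyTorch: a perturbation trained to survive
# convolution with a few measured impulse responses (IRs).
import torch
import torch.nn.functional as F

torch.manual_seed(0)

SAMPLE_RATE = 16000
audio = torch.randn(1, SAMPLE_RATE)             # stand-in for the benign recording
measured_irs = [torch.randn(1, 1, 256) * 0.05   # stand-in for measured device/channel IRs
                for _ in range(4)]

# Placeholder "ASR": a fixed random projection; a real attack would back-propagate
# through the target recognizer toward the adversarial transcription.
proj = torch.randn(64, SAMPLE_RATE)
target = torch.randn(64)

def asr_loss(waveform: torch.Tensor) -> torch.Tensor:
    """Surrogate for the recognizer's loss on the attacker-chosen transcript."""
    return F.mse_loss(proj @ waveform.squeeze(0), target)

delta = torch.zeros_like(audio, requires_grad=True)  # perturbation to learn
opt = torch.optim.Adam([delta], lr=1e-3)
EPS = 0.01                                           # perceptibility budget

for step in range(200):
    opt.zero_grad()
    loss = torch.tensor(0.0)
    for ir in measured_irs:                          # average over measured distortions
        x = (audio + delta).unsqueeze(0)             # shape (1, 1, T)
        distorted = F.conv1d(x, ir, padding=ir.shape[-1] // 2).squeeze(0)
        loss = loss + asr_loss(distorted[:, :SAMPLE_RATE])
    loss = loss / len(measured_irs)
    loss.backward()
    opt.step()
    with torch.no_grad():                            # keep the perturbation small
        delta.clamp_(-EPS, EPS)

print("final surrogate loss:", float(loss))
```

Averaging the loss over several measured impulse responses is what makes the perturbation transferable across distortions; Metamorph additionally refines it with domain adaptation and shapes it to reduce human perceptibility.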
