Jinfeng Li (Zhejiang University), Shouling Ji (Zhejiang University), Tianyu Du (Zhejiang University), Bo Li (University of California, Berkeley), Ting Wang (Lehigh University)

Deep Learning-based Text Understanding (DLTU) is the backbone technique behind various applications, including question answering, machine translation, and text classification. Despite its tremendous popularity, the security vulnerabilities of DLTU remain largely unknown, which is highly concerning given its increasing use in security-sensitive applications such as user sentiment analysis and toxic content detection. In this paper, we show that DLTU is inherently vulnerable to adversarial text attacks, in which maliciously crafted text causes target DLTU systems and services to misbehave. Specifically, we present TextBugger, a general attack framework for generating adversarial text. TextBugger differs from prior work in significant ways: (i) effective -- it outperforms state-of-the-art attacks in terms of attack success rate; (ii) evasive -- it preserves the utility of benign text, with 94.9% of the adversarial text correctly recognized by human readers; and (iii) efficient -- it generates adversarial text with computational complexity sub-linear in the text length. We empirically evaluate TextBugger on a set of real-world DLTU systems and services used for sentiment analysis and toxic content detection, demonstrating its effectiveness, evasiveness, and efficiency. For instance, TextBugger achieves a 100% success rate against Amazon AWS Comprehend on the IMDB dataset within 4.61 seconds while preserving 97% semantic similarity. We further discuss possible defense mechanisms to mitigate such attacks and the adversary's potential countermeasures, which together point to promising directions for further research.
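The abstract describes the attack only at a high level. For intuition, below is a minimal Python sketch of a TextBugger-style attack: character-level "bugs" (insert, delete, swap, substitute with visually similar characters) applied greedily to the most important words of a text until a black-box classifier flips its prediction. The `score` callable, the `SIMILAR` substitution map, and the greedy acceptance rule are illustrative assumptions, not the authors' implementation.

```python
import random

# Hypothetical visually-similar substitutions; the paper's actual table differs.
SIMILAR = {"o": "0", "l": "1", "a": "@", "i": "1", "e": "3", "s": "$"}

def bugs(word):
    """Character-level perturbations in the spirit of TextBugger's
    insert / delete / swap / substitute-C operations."""
    out, n = [], len(word)
    if n >= 3:
        out.append(word[: n // 2] + " " + word[n // 2:])  # insert a space
        out.append(word[:1] + word[2:])                   # delete a character
    if n >= 4:
        i = random.randrange(1, n - 2)                    # swap interior chars
        out.append(word[:i] + word[i + 1] + word[i] + word[i + 2:])
    for i, c in enumerate(word):                          # substitute-C
        if c in SIMILAR:
            out.append(word[:i] + SIMILAR[c] + word[i + 1:])
            break
    return out

def attack(text, score, threshold=0.5):
    """Greedy black-box attack. `score(text)` is assumed to return the
    classifier's confidence in the currently predicted class; the attack
    succeeds once that confidence drops below `threshold`."""
    words = text.split()
    base = score(text)
    # Word importance: confidence drop when the word is removed.
    importance = sorted(
        ((base - score(" ".join(words[:i] + words[i + 1:])), i)
         for i in range(len(words))),
        reverse=True,
    )
    for _, i in importance:
        candidates = bugs(words[i])
        if not candidates:
            continue
        # Pick the bug that lowers the confidence the most.
        best = min(candidates,
                   key=lambda b: score(" ".join(words[:i] + [b] + words[i + 1:])))
        if score(" ".join(words[:i] + [best] + words[i + 1:])) < score(" ".join(words)):
            words[i] = best
        if score(" ".join(words)) < threshold:
            break  # prediction flipped
    return " ".join(words)
```

Against a toy sentiment model, attack("the movie was terrible", model_score) might return something like "the movie was terrib1e", which a human still reads as negative but the model may misclassify. The full TextBugger additionally uses word-level embedding substitutions (substitute-W) and, in the white-box setting, gradient information to rank words.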
