Hadi Abdullah (Visa Research), Aditya Karlekar (University of Florida), Saurabh Prasad (University of Florida), Muhammad Sajidur Rahman (University of Florida), Logan Blue (University of Florida), Luke A. Bauer (University of Florida), Vincent Bindschaedler (University of Florida), Patrick Traynor (University of Florida)

Audio CAPTCHAs are supposed to provide a strong defense for online resources; however, advances in speech-to-text mechanisms have rendered these defenses ineffective. Audio CAPTCHAs cannot simply be abandoned, as they are specifically named by the W3C as important enablers of accessibility. Accordingly, demonstrably more robust audio CAPTCHAs are important to the future of a secure and accessible Web. We look to recent literature on attacks against speech-to-text systems for inspiration in constructing robust, principle-driven audio defenses. We begin by comparing 20 recent attack papers, classifying and measuring their suitability to serve as the basis of new "robust to transcription" but "easy for humans to understand" CAPTCHAs. After showing that none of these attacks alone is sufficient, we propose a new mechanism that is both comparatively intelligible (evaluated through a user study) and hard to transcribe automatically (i.e., $P(\mathrm{transcription}) = 4 \times 10^{-5}$). We also demonstrate that our audio samples have a high probability of being detected as CAPTCHAs when given to speech-to-text systems ($P(\mathrm{evasion}) = 1.77 \times 10^{-4}$). Finally, we show that our method is robust to WaveGuard, a popular defense designed to defeat adversarial examples (i.e., to let ASRs output the original transcript instead of the adversarial one): our method breaks WaveGuard with a 99% success rate. In so doing, we not only demonstrate a CAPTCHA that is approximately four orders of magnitude more difficult to crack, but also show that such systems can be designed using insights gained from attack papers about the differences between the ways that humans and computers process audio.
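The transcription-success probability reported above is an empirical rate: the fraction of generated CAPTCHA samples that an automatic speech recognition (ASR) system transcribes correctly. Below is a minimal sketch of how such a rate could be estimated over a corpus of generated samples; the `asr_transcribe` callable and the exact-match normalization are hypothetical stand-ins, not the paper's actual evaluation harness or any specific ASR API.

```python
# Hypothetical sketch: estimate P(transcription) by replaying generated
# audio CAPTCHAs through an ASR and counting exact-match transcripts.
# `asr_transcribe` stands in for any speech-to-text system under test;
# it takes an audio file path and returns its best transcript as a string.

def estimate_p_transcription(samples, ground_truths, asr_transcribe):
    """samples: list of audio file paths; ground_truths: expected strings.

    Returns the fraction of samples the ASR transcribed exactly
    (after trivial whitespace/case normalization).
    """
    successes = sum(
        asr_transcribe(path).strip().lower() == truth.strip().lower()
        for path, truth in zip(samples, ground_truths)
    )
    return successes / len(samples)
```

Under this framing, a robust audio CAPTCHA scheme drives the estimate toward zero (the paper reports $4 \times 10^{-5}$) while a companion user study keeps human intelligibility acceptably high.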
