Kavita Kumari (Technical University of Darmstadt), Maryam Abbasihafshejani (University of Texas at San Antonio), Alessandro Pegoraro (Technical University of Darmstadt), Phillip Rieger (Technical University of Darmstadt), Kamyar Arshi (Technical University of Darmstadt), Murtuza Jadliwala (University of Texas at San Antonio), Ahmad-Reza Sadeghi (Technical University of Darmstadt)

Recent advancements in synthetic speech generation, including text-to-speech (TTS) and voice conversion (VC) models, allow the generation of convincing synthetic voices, often referred to as audio deepfakes. These deepfakes pose a growing threat, as adversaries can use them to impersonate individuals, particularly prominent figures, on social media or to bypass voice authentication systems, with broad societal impact. The inability of state-of-the-art verification systems to detect voice deepfakes effectively is alarming.
We propose VoiceRadar, a novel audio deepfake detection method that augments machine learning with physical models to approximate frequency dynamics and oscillations in audio samples, significantly enhancing detection capability. VoiceRadar leverages two main physical models: (i) the Doppler effect, to understand frequency changes in audio samples, and (ii) drumhead vibrations, to decompose complex audio signals into their component frequencies. By applying these models, VoiceRadar identifies subtle variations, or micro-frequencies, in the audio signal. These micro-frequencies are aggregated to compute the observed frequency, which captures the unique signature of the audio. This observed frequency is integrated into the machine learning algorithm's loss function, enabling the algorithm to recognize the distinct patterns that differentiate human-produced audio from AI-generated audio.
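The abstract only outlines this physics-inspired loss, so the sketch below is a rough illustration of how such a term could be wired into a training objective, not the authors' implementation. The FFT-based decomposition, the spectral-centroid aggregation, and every name and constant here (`observed_frequency`, `voiceradar_style_loss`, `human_ref_freq`, the 0.1 weight) are assumptions standing in for the paper's drumhead-vibration and Doppler models.

```python
import torch
import torch.nn.functional as F

def observed_frequency(audio: torch.Tensor, sample_rate: int = 16000) -> torch.Tensor:
    """Spectral-centroid proxy for the aggregated micro-frequencies (assumed stand-in)."""
    spectrum = torch.fft.rfft(audio, dim=-1).abs()                    # component frequencies
    freqs = torch.fft.rfftfreq(audio.shape[-1], d=1.0 / sample_rate,
                               device=audio.device)                   # bin centers in Hz
    # Magnitude-weighted mean frequency: one scalar "observed frequency" per sample.
    return (spectrum * freqs).sum(dim=-1) / spectrum.sum(dim=-1).clamp_min(1e-8)

def voiceradar_style_loss(logits, labels, audio, human_ref_freq=120.0, weight=0.1):
    """Cross-entropy plus a physics-motivated frequency penalty (illustrative only)."""
    ce = F.cross_entropy(logits, labels)
    f_obs = observed_frequency(audio)
    # Penalize deviation of the observed frequency from a reference expected of
    # human speech, applied to samples labeled human (label 0); the reference
    # value, the masking, and the 0.1 weight are all assumptions.
    human_mask = (labels == 0).float()
    freq_term = (human_mask * (f_obs - human_ref_freq).abs()).mean()
    return ce + weight * freq_term
```

Called with a batch of raw waveforms alongside the classifier logits, the extra term nudges the model to treat the aggregated frequency signature as a class-discriminative feature, which is the role the abstract ascribes to the observed frequency.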
To comprehensively evaluate VoiceRadar, we constructed a new, diverse dataset featuring samples from leading TTS and VC models. Our results demonstrate that VoiceRadar outperforms existing methods in accurately identifying AI-generated audio samples, showcasing its potential as a robust tool for audio deepfake detection.
