Kavita Kumari (Technical University of Darmstadt), Maryam Abbasihafshejani (University of Texas at San Antonio), Alessandro Pegoraro (Technical University of Darmstadt), Phillip Rieger (Technical University of Darmstadt), Kamyar Arshi (Technical University of Darmstadt), Murtuza Jadliwala (University of Texas at San Antonio), Ahmad-Reza Sadeghi (Technical University of Darmstadt)

Recent advancements in synthetic speech generation, including text-to-speech (TTS) and voice conversion (VC) models, allow the generation of convincing synthetic voices, often referred to as audio deepfakes. These deepfakes pose a growing threat, as adversaries can use them to impersonate individuals, particularly prominent figures, on social media or to bypass voice authentication systems, with broad societal impact. The inability of state-of-the-art verification systems to detect voice deepfakes effectively is alarming.
We propose a novel audio deepfake detection method, VoiceRadar, that augments machine learning with physical models to approximate frequency dynamics and oscillations in audio samples, significantly enhancing detection capabilities. VoiceRadar leverages two main physical models: (i) the Doppler effect, to understand frequency changes in audio samples, and (ii) drumhead vibrations, to decompose complex audio signals into their component frequencies. By applying these models, VoiceRadar identifies subtle variations, or micro-frequencies, in the audio signals. These micro-frequencies are aggregated to compute the observed frequency, capturing the unique signature of the audio. This observed frequency is integrated into the machine learning algorithm’s loss function, enabling the algorithm to recognize distinct patterns that differentiate human-produced audio from AI-generated audio.
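The pipeline described above can be sketched in simplified form: decompose a signal into dominant component frequencies, aggregate them into a single observed frequency, and add a frequency-consistency term to a standard classification loss. This is a minimal illustration, not the paper's actual implementation; the function names, the FFT-based decomposition (standing in for the drumhead-vibration model), the magnitude-weighted aggregation, and the `alpha` weighting are all assumptions for the sake of the example.

```python
import numpy as np

def micro_frequencies(signal, sample_rate, n_components=5):
    """Extract the n dominant component frequencies of a signal.
    A plain FFT peak-picking stand-in for the paper's drumhead-vibration
    decomposition (illustrative assumption, not the actual method)."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    top = np.argsort(spectrum)[-n_components:]
    return freqs[top], spectrum[top]

def observed_frequency(signal, sample_rate):
    """Aggregate micro-frequencies into one observed frequency.
    Here: a magnitude-weighted mean (an assumed aggregation rule)."""
    freqs, weights = micro_frequencies(signal, sample_rate)
    return float(np.average(freqs, weights=weights))

def augmented_loss(ce_loss, obs_freq_target, obs_freq_model, alpha=0.1):
    """Add a frequency-consistency penalty to a base classification
    loss, mirroring how the abstract describes integrating the
    observed frequency into the learning objective."""
    return ce_loss + alpha * (obs_freq_target - obs_freq_model) ** 2
```

For a pure 440 Hz tone sampled at 8 kHz, `observed_frequency` recovers a value close to 440, and `augmented_loss` leaves the base loss untouched when the model's frequency estimate matches the target.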
We constructed a new diverse dataset to comprehensively evaluate VoiceRadar, featuring samples from leading TTS and VC models. Our results demonstrate that VoiceRadar outperforms existing methods in accurately identifying AI-generated audio samples, showcasing its potential as a robust tool for audio deepfake detection.
