Kavita Kumari, Alessandro Pegoraro, Hossein Fereidooni, and Ahmad-Reza Sadeghi (Technical University of Darmstadt, Germany)

The potential misuse of ChatGPT and other Large Language Models (LLMs) has raised concerns regarding the dissemination of false information, plagiarism, academic dishonesty, and fraudulent activities. Consequently, distinguishing between AI-generated and human-generated content has emerged as an intriguing research topic. However, current text detection methods lack precision and are often restricted to specific tasks or domains, making them inadequate for identifying content generated by ChatGPT. In this paper, we propose an effective ChatGPT detector named DEMASQ, which accurately identifies ChatGPT-generated content. Our method addresses two critical factors: (i) the distinct biases in text composition observed in human and machine-generated content and (ii) the alterations made by humans to evade previous detection methods. DEMASQ is an energy-based detection model that incorporates novel aspects, such as (i) optimization inspired by the Doppler effect to capture the interdependence between input text embeddings and output labels, and (ii) the use of explainable AI techniques to generate diverse perturbations. To evaluate our detector, we create a benchmark dataset comprising a mixture of prompts from both ChatGPT and humans, encompassing domains such as medical, open Q&A, finance, wiki, and Reddit. Our evaluation demonstrates that DEMASQ achieves high accuracy in identifying content generated by ChatGPT.
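To make the core idea of an energy-based detector concrete, the sketch below shows a generic scorer that assigns an energy to each (text embedding, label) pair and classifies by picking the lower-energy label. This is only an illustrative toy, not DEMASQ itself: the class and function names are hypothetical, and the paper's Doppler-effect-inspired objective and XAI-driven perturbations are not reproduced here.

```python
# Illustrative sketch only: a minimal energy-based binary detector over text
# embeddings. All names here are hypothetical and NOT taken from the DEMASQ
# paper; the actual DEMASQ training objective is not implemented.
import torch
import torch.nn as nn


class EnergyScorer(nn.Module):
    """Assigns a scalar energy E(x, y) to an (embedding, label) pair;
    lower energy means the pair is judged more compatible."""

    def __init__(self, embed_dim: int, hidden_dim: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(embed_dim + 1, hidden_dim),  # +1 for the candidate label
            nn.ReLU(),
            nn.Linear(hidden_dim, 1),
        )

    def forward(self, x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
        # x: (batch, embed_dim) text embeddings; y: (batch,) labels in {0, 1}
        return self.net(torch.cat([x, y.float().unsqueeze(-1)], dim=-1)).squeeze(-1)


def predict(scorer: EnergyScorer, x: torch.Tensor) -> torch.Tensor:
    """Pick the label (0 = human, 1 = ChatGPT) with the lower energy."""
    energies = torch.stack(
        [scorer(x, torch.full((x.size(0),), y)) for y in (0, 1)], dim=-1
    )
    return energies.argmin(dim=-1)


if __name__ == "__main__":
    # Toy usage with random vectors standing in for real text embeddings.
    scorer = EnergyScorer(embed_dim=768)
    fake_embeddings = torch.randn(4, 768)
    print(predict(scorer, fake_embeddings))
```

In this formulation, prediction amounts to comparing the energies of the two candidate labels for a given input, which is how an energy-based model captures the interdependence between input embeddings and output labels that the abstract refers to.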
