Yiluo Wei (The Hong Kong University of Science and Technology (Guangzhou)), Peixian Zhang (The Hong Kong University of Science and Technology (Guangzhou)), Gareth Tyson (The Hong Kong University of Science and Technology (Guangzhou))

AI character platforms, which allow users to engage in conversations with AI personas, are a rapidly growing application domain. However, their immersive and personalized nature, combined with technical vulnerabilities, raises significant safety concerns. Despite their popularity, a systematic evaluation of their safety has been notably absent. To address this gap, we conduct the first large-scale safety study of AI character platforms, evaluating 16 popular platforms using a benchmark set of 5,000 questions across 16 safety categories. Our findings reveal a critical safety deficit: AI character platforms exhibit an average unsafe response rate of 65.1%, substantially higher than the 17.7% average rate of the baselines. We further discover that safety performance varies significantly across characters and is strongly correlated with character features such as demographics and personality. Leveraging these insights, we demonstrate that a machine learning model can identify less safe characters with an F1-score of 0.81. This predictive capability can benefit platforms by enabling improved mechanisms for safer interactions, character search and recommendation, and character creation. Overall, our results and findings offer valuable insights for enhancing platform governance and content moderation toward safer AI character platforms.
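The abstract reports classifier quality as an F1-score of 0.81. For readers unfamiliar with the metric, the sketch below shows how F1 is computed from predictions; the labels and helper function here are illustrative examples, not the paper's actual data or pipeline (1 = character judged less safe, 0 = safer).

```python
# Minimal sketch of the F1-score metric (harmonic mean of precision and
# recall). The labels below are hypothetical, for illustration only.
def f1_score(y_true, y_pred):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Hypothetical ground truth vs. model predictions for ten characters.
y_true = [1, 1, 0, 1, 0, 0, 1, 1, 0, 1]
y_pred = [1, 1, 0, 0, 0, 1, 1, 1, 0, 1]
print(round(f1_score(y_true, y_pred), 2))  # → 0.83
```

An F1 of 0.81 thus means the model balances precision (few safe characters flagged) and recall (few unsafe characters missed) at roughly this level.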

