Friedemann Lipphardt (MPI-INF), Moonis Ali (MPI-INF), Martin Banzer (MPI-INF), Anja Feldmann (MPI-INF), Devashish Gosain (IIT Bombay)

Large language models (LLMs) are widely used for information access, yet their content moderation behavior varies sharply across geographic and linguistic contexts. This paper presents the first comprehensive analysis of content moderation patterns detected in over 700,000 responses from 15 leading LLMs, evaluated from 12 locations using 1,118 sensitive queries spanning five categories in 13 languages.

We find substantial geographic variation, with moderation rates showing relative differences of up to 60% across locations. For instance, soft moderation (e.g., evasive replies) appears in 14.3% of German contexts versus 24.9% of Zulu contexts. Category-wise, miscellaneous (generally unsafe), hate speech, and sexual content are moderated more heavily than political or religious content, with political content showing the greatest geographic variability. We also observe discrepancies between online and offline model versions; for example, DeepSeek exhibits a 15.2% higher relative soft moderation rate when deployed locally than when accessed via API. Our analysis of response length and time further reveals that moderated responses are, on average, about 50% shorter than unmoderated ones.
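The relative comparisons above can be sketched numerically. This is a minimal illustration, assuming the relative difference between two rates is defined as (high − low) / low; the abstract does not state the exact formula, so this definition is an assumption:

```python
def relative_difference(high: float, low: float) -> float:
    """Relative difference between two moderation rates,
    expressed as a fraction of the lower rate (assumed definition)."""
    return (high - low) / low

# Example with the soft-moderation rates quoted in the abstract:
# 24.9% in Zulu contexts vs. 14.3% in German contexts.
zulu, german = 0.249, 0.143
print(f"{relative_difference(zulu, german):.1%}")  # prints "74.1%"
```

Under this definition the German/Zulu language gap exceeds the 60% figure, which the abstract attributes specifically to differences across locations rather than across languages.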

These findings have important implications for AI fairness and digital equity, as users in different locations receive inconsistent access to information. We provide the first systematic evidence of geographic and cross-language bias in LLM content moderation and show how model selection substantially shapes user experience.
