Friedemann Lipphardt (MPI-INF), Moonis Ali (MPI-INF), Martin Banzer (MPI-INF), Anja Feldmann (MPI-INF), Devashish Gosain (IIT Bombay)

Large language models (LLMs) are widely used for information access, yet their content moderation behavior varies sharply across geographic and linguistic contexts. This paper presents the first comprehensive analysis of content moderation patterns detected in over 700,000 replies from 15 leading LLMs, evaluated from 12 locations using 1,118 sensitive queries spanning five categories in 13 languages.
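To make the scale of this sweep concrete, the minimal Python sketch below shows one way a models × locations × languages × queries evaluation matrix could be organized. It is an illustrative assumption, not the paper's actual pipeline: query_llm, classify_response, and the example lists are hypothetical placeholders.

from itertools import product

# Hypothetical placeholders standing in for the paper's 15 LLMs, 12 locations,
# 13 languages, and 1,118 sensitive queries.
MODELS = ["model-a", "model-b"]
LOCATIONS = ["de", "za"]
LANGUAGES = ["German", "Zulu"]
QUERIES = ["sensitive query 1", "sensitive query 2"]

def query_llm(model, location, language, query):
    """Hypothetical stand-in for sending `query` in `language` to `model`
    from a vantage point in `location`; returns a canned reply here."""
    return "I cannot help with that request."

def classify_response(reply):
    """Hypothetical, deliberately simple labeling rule used only for illustration:
    refusal phrasing is labeled 'hard' moderation, everything else 'unmoderated'."""
    return "hard" if "cannot help" in reply.lower() else "unmoderated"

results = []
for model, location, language, query in product(MODELS, LOCATIONS, LANGUAGES, QUERIES):
    reply = query_llm(model, location, language, query)
    results.append({"model": model, "location": location,
                    "language": language, "query": query,
                    "label": classify_response(reply)})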

We find substantial geographic variation, with moderation rates showing relative differences of up to 60% across locations; for instance, soft moderation (e.g., evasive replies) appears in 14.3% of German contexts versus 24.9% in Zulu contexts. Category-wise, misc. (generally unsafe), hate speech, and sexual content are moderated more heavily than political or religious content, with political content showing the most geographic variability. We also observe discrepancies between online and offline model versions, such as DeepSeek exhibiting a 15.2% higher relative soft moderation rate when deployed locally than when accessed via its API. Our analysis of response length (and time) reveals that moderated responses are, on average, about 50% shorter than unmoderated ones.
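As a quick illustration of the "relative difference" framing, the arithmetic below computes one pairwise relative difference from the German/Zulu soft-moderation rates quoted above. The choice of denominator (the larger rate) is an assumption made here for illustration; the abstract does not state the paper's exact normalization.

# Illustrative arithmetic only, using the rates quoted in the abstract.
german_soft_rate = 0.143   # soft moderation rate observed in German contexts
zulu_soft_rate = 0.249     # soft moderation rate observed in Zulu contexts

# Assumed normalization: difference relative to the larger of the two rates.
relative_diff = (zulu_soft_rate - german_soft_rate) / max(zulu_soft_rate, german_soft_rate)
print(f"Relative difference for this pair: {relative_diff:.1%}")  # ~42.6%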

These findings have important implications for AI fairness and digital equity, as users in different locations receive inconsistent access to information. We provide the first systematic evidence of geographic and cross-language bias in LLM content moderation and show how model selection substantially shapes user experience.
