Friedemann Lipphardt (MPI-INF), Moonis Ali (MPI-INF), Martin Banzer (MPI-INF), Anja Feldmann (MPI-INF), Devashish Gosain (IIT Bombay)

Large language models (LLMs) are widely used for information access, yet their content moderation behavior varies sharply across geographic and linguistic contexts. This paper presents the first comprehensive analysis of content moderation patterns, detected in over 700,000 replies from 15 leading LLMs evaluated from 12 locations using 1,118 sensitive queries spanning five categories in 13 languages.

We find substantial geographic variation, with moderation rates showing relative differences of up to 60% across locations—for instance, soft moderation (e.g., evasive replies) appears in 14.3% of German contexts versus 24.9% of Zulu contexts. Category-wise, miscellaneous (generally unsafe), hate speech, and sexual content are more heavily moderated than political or religious content, with political content showing the most geographic variability. We also observe discrepancies between online and offline model versions: for example, DeepSeek exhibits a 15.2% higher relative soft moderation rate when deployed locally than when accessed via its API. Our analysis of response length (and time) reveals that moderated responses are, on average, about 50% shorter than unmoderated ones.

These findings have important implications for AI fairness and digital equity, as users in different locations receive inconsistent access to information. We provide the first systematic evidence of geographic and cross-language bias in LLM content moderation and show how model selection vastly impacts user experience.
