Anna Ablove (University of Michigan), Shreyas Chandrashekaran (University of Michigan), Xiao Qiang (University of California at Berkeley), Roya Ensafi (University of Michigan)

From network-level censorship by the Great Firewall to platform-specific mechanisms implemented by third-party services like TOM-Skype and WeChat, Internet censorship in China has continually evolved in response to new technologies. In the current era of AI, emerging tools like Large Language Models (LLMs) are no exception. Yet, ensuring compliance with China's strict, legally mandated censorship standards presents a unique and complex challenge for service providers. While current research on content moderation in LLMs focuses primarily on alignment techniques, these techniques are too unreliable on their own to satisfy strictly enforced information controls.

In this work, we present the first study of overt blocking embedded in Chinese LLM services. By leveraging information leaks in the communication between server and client during active chat sessions, we extract where blocking decisions are embedded within each service's workflow. We observe a persistent reliance on traditional, dated blocking strategies in prominent services: Baidu-Chat, DeepSeek, Doubao, Kimi, and Qwen. We find blocking placed during the input, output, and search phases, with the latter two leaking varying amounts of censored information to client machines, including near-complete responses and search references never rendered in the browser.
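The leak pattern described above can be illustrated with a small sketch: if a service streams response text incrementally and only later emits an overt block signal, everything streamed before that signal has already reached the client. The event format, field names, and `analyze_stream` helper below are illustrative assumptions for a captured stream, not any provider's actual API.

```python
# Hypothetical sketch: measuring how much response text leaks to the
# client before an overt block signal arrives in a streamed chat reply.
# The event schema ("delta"/"block", "text") is an assumption made for
# illustration only.

def analyze_stream(events):
    """Replay a captured event stream and report whether an overt block
    occurred and how many characters leaked before it."""
    leaked = []
    blocked = False
    for ev in events:
        if ev.get("type") == "delta":       # incremental response text
            leaked.append(ev.get("text", ""))
        elif ev.get("type") == "block":     # overt censorship signal
            blocked = True
            break                           # service stops the stream here
    return {"blocked": blocked, "leaked_chars": len("".join(leaked))}

# Example capture: most of the answer arrives before the block event,
# mirroring the "near-complete responses" leak described above.
capture = [
    {"type": "delta", "text": "The event you asked about "},
    {"type": "delta", "text": "took place in ..."},
    {"type": "block", "reason": "content_policy"},
]
summary = analyze_stream(capture)
```

In a real measurement, the captured events would come from instrumenting the browser's network layer during a live chat session; the leaked text can then be compared against what the client UI actually renders.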

Caught between competition on the global stage and homegrown censorship restrictions, service providers host models effectively at war with themselves, and we observe the resulting concessions in real time. Through this work, we emphasize the importance of a more holistic threat model of LLM content accessibility, one that integrates live deployments to study access as it pertains to real-world usage, especially in heavily censored regions.
