Alessandro Galeazzi (University of Padua), Pujan Paudel (Boston University), Mauro Conti (University of Padua), Emiliano De Cristofaro (UC Riverside), Gianluca Stringhini (Boston University)

In recent years, the opaque design and the limited public understanding of social networks' recommendation algorithms have raised concerns about potential manipulation of information exposure. Reducing content visibility, also known as shadow banning, may help limit harmful content; however, it can also be used to suppress dissenting voices. This prompts the need for greater transparency and a better understanding of this practice.

In this paper, we investigate the presence of visibility alterations through a large-scale quantitative analysis of two Twitter/X datasets comprising over 40 million tweets from more than 9 million users, focusing on discussions surrounding the Ukraine–Russia conflict and the 2024 US Presidential Elections. We use view counts to detect patterns of reduced or inflated visibility and examine how these correlate with user opinions, social roles, and narrative framings. Our analysis shows that the algorithm systematically penalizes tweets containing links to external resources, reducing their visibility by up to a factor of eight, regardless of ideological stance or source reliability. Instead, content visibility may be penalized or favored depending on the specific accounts producing it, as observed when comparing tweets from the Kyiv Independent and RT.com, or tweets by Donald Trump and Kamala Harris. Overall, our work highlights the importance of transparency in content moderation and recommendation systems to protect the integrity of public discourse and ensure equitable access to online platforms.
