Yong Zhuang (Wuhan University), Keyan Guo (University at Buffalo), Juan Wang (Wuhan University), Yiheng Jing (Wuhan University), Xiaoyang Xu (Wuhan University), Wenzhe Yi (Wuhan University), Mengda Yang (Wuhan University), Bo Zhao (Wuhan University), Hongxin Hu (University at Buffalo)

Memes have become a double-edged sword on social media platforms. On one hand, they facilitate the rapid dissemination of information and enhance communication. On the other hand, memes pose a risk of spreading harmful content under the guise of humor and virality. This duality highlights the need for effective moderation tools capable of identifying harmful memes. Current detection methods, however, face significant challenges due to the inherent complexity of memes, which arises from their diverse forms of expression, intricate compositions, sophisticated propaganda techniques, and the varied cultural contexts in which they are created and circulated. These factors make it difficult for existing algorithms to accurately distinguish between harmless and harmful content. To understand and address these challenges, we first conduct a comprehensive study on harmful memes from two novel perspectives: visual arts and propaganda techniques. This study assesses existing tools for detecting harmful memes and examines the complexities inherent in them. Our findings demonstrate that meme compositions and propaganda techniques can significantly diminish the effectiveness of current harmful meme detection methods. Informed by these observations, we propose a novel framework called HMGUARD for effective detection of harmful memes. HMGUARD utilizes adaptive prompting and chain-of-thought (CoT) reasoning in multimodal large language models (MLLMs). It achieves strong performance on the public harmful meme dataset, with an accuracy of 0.92, exceeding the baselines by 15% to 79.17%. Additionally, HMGUARD outperforms existing detection tools in real-world scenarios, achieving an accuracy of 0.88.
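The combination of adaptive prompting and CoT reasoning described in the abstract can be pictured as a staged prompt pipeline: first elicit the meme's composition and embedded text, then adapt a step-by-step reasoning prompt to those attributes before asking for a verdict. The sketch below is only an illustration of that idea under assumed interfaces; `query_mllm` is a hypothetical placeholder for any multimodal LLM call, and the prompts are simplified examples, not the authors' HMGUARD implementation.

```python
# Illustrative sketch of adaptive prompting + chain-of-thought (CoT) reasoning
# for harmful-meme detection. NOT the authors' HMGUARD code; `query_mllm` is a
# hypothetical stand-in for any multimodal LLM API.

def query_mllm(image_path: str, prompt: str) -> str:
    """Placeholder: send an image plus a text prompt to an MLLM and return its reply."""
    raise NotImplementedError("Connect this to an actual multimodal LLM endpoint.")

def detect_harmful_meme(image_path: str) -> str:
    # Step 1: extract surface attributes (image content, overlaid text, composition).
    attributes = query_mllm(
        image_path,
        "Describe this meme: the image content, any overlaid text, and how "
        "the text and image are composed together."
    )

    # Step 2: adapt the CoT prompt to the observed composition and possible
    # propaganda techniques before asking for a final judgment.
    cot_prompt = (
        f"Meme description: {attributes}\n"
        "Think step by step:\n"
        "1. What message does the combination of image and text convey?\n"
        "2. Does it rely on propaganda techniques (e.g., loaded language, "
        "scapegoating, dehumanizing metaphors)?\n"
        "3. Considering cultural context, does it target or demean a person or group?\n"
        "Conclude with exactly one word: HARMFUL or HARMLESS."
    )
    reasoning = query_mllm(image_path, cot_prompt)

    # Step 3: parse the verdict from the model's reasoning.
    return "harmful" if "HARMFUL" in reasoning.upper() else "harmless"
```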
