Haotian Zhu (Nanjing University of Science and Technology), Shuchao Pang (Nanjing University of Science and Technology), Zhigang Lu (Western Sydney University), Yongbin Zhou (Nanjing University of Science and Technology), Minhui Xue (CSIRO's Data61)

Fine-tuning technology for text-to-image diffusion models allows anyone to generate large numbers of customized photos from a limited set of identity images. Although this technology is easy to use, its misuse could violate personal portrait rights and privacy, and the resulting false information and harmful content could cause further harm to individuals. Several methods have been proposed to protect faces from customization by adding protective noise to user images that disrupts the fine-tuned models.

Unfortunately, simple pre-processing techniques such as JPEG compression, a routine operation performed by modern social networks, can easily erase the protective effects of existing methods. To counter JPEG compression and other potential pre-processing, we propose GAP-Diff, a framework for Generating data with Adversarial Perturbations for text-to-image Diffusion models using unsupervised learning-based optimization, consisting of three functional modules. Specifically, our framework learns representations robust to JPEG compression by backpropagating gradient information through a pre-processing simulation module, while simultaneously learning adversarial characteristics that disrupt fine-tuned text-to-image diffusion models. Furthermore, by designing adversarial losses against these fine-tuning methods and JPEG compression, we obtain an adversarial mapping from clean images to protected images that produces stronger protective noise within milliseconds. Experiments on facial benchmarks, compared against state-of-the-art protective methods, demonstrate that GAP-Diff significantly enhances the resistance of protective noise to JPEG compression, thereby better safeguarding user privacy and copyrights in the digital world.
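The abstract's key technical idea is backpropagating gradients through a pre-processing simulation module even though JPEG's quantization step is non-differentiable. The paper's implementation is not reproduced here; the following is a minimal, hypothetical sketch of one standard way to do this, a straight-through estimator around a simplified JPEG-style degradation (8x8 DCT, uniform coefficient quantization, inverse DCT). All function names and the quantization step `q` are illustrative assumptions, not the authors' code.

```python
import numpy as np

def dct_matrix(n=8):
    """Orthonormal DCT-II basis matrix (as used per 8x8 JPEG block)."""
    C = np.zeros((n, n))
    for k in range(n):
        for i in range(n):
            C[k, i] = np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    C[0] *= np.sqrt(1.0 / n)
    C[1:] *= np.sqrt(2.0 / n)
    return C

C = dct_matrix()

def jpeg_like(block, q=20.0):
    """Hard (non-differentiable) JPEG-style degradation of one 8x8 block:
    DCT -> uniform quantization with step q -> inverse DCT.
    np.round has zero gradient almost everywhere, blocking backprop."""
    coeffs = C @ block @ C.T
    coeffs_q = np.round(coeffs / q) * q
    return C.T @ coeffs_q @ C

def jpeg_ste(block, q=20.0):
    """Straight-through estimator: the forward output equals jpeg_like(block),
    but the expression is 'block + residual'. Inside an autodiff framework,
    the residual would be detached (e.g. stop_gradient), so gradients flow
    through as the identity, letting the perturbation generator train
    against simulated compression."""
    residual = jpeg_like(block, q) - block  # treated as a constant in backprop
    return block + residual
```

In an actual training loop, `residual` would be wrapped in the framework's stop-gradient operation so that the protective-noise generator sees compressed images in the forward pass while still receiving usable gradients in the backward pass.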
