Haotian Zhu (Nanjing University of Science and Technology), Shuchao Pang (Nanjing University of Science and Technology), Zhigang Lu (Western Sydney University), Yongbin Zhou (Nanjing University of Science and Technology), Minhui Xue (CSIRO's Data61)

Fine-tuning techniques for text-to-image diffusion models allow anyone to generate large numbers of customized photos from only a few identity images. Although this technology is easy to use, its misuse can violate personal portrait rights and privacy, and the resulting false or harmful content can cause further harm to individuals. Several methods have been proposed to protect faces from such customization by adding protective noise to user images that disrupts the fine-tuned models.

Unfortunately, simple pre-processing techniques such as JPEG compression, a routine operation performed by modern social networks, can easily erase the protective effects of existing methods. To counter JPEG compression and other potential pre-processing, we propose GAP-Diff, a framework for Generating data with Adversarial Perturbations for text-to-image Diffusion models using unsupervised learning-based optimization, consisting of three functional modules. Specifically, our framework learns representations robust to JPEG compression by backpropagating gradient information through a pre-processing simulation module, while also learning adversarial characteristics that disrupt fine-tuned text-to-image diffusion models. Furthermore, by designing adversarial losses against these fine-tuning methods and JPEG compression, we obtain an adversarial mapping from clean images to protected images that produces stronger protective noise within milliseconds. Facial benchmark experiments against state-of-the-art protection methods demonstrate that GAP-Diff significantly enhances the resistance of protective noise to JPEG compression, thereby better safeguarding user privacy and copyrights in the digital world.
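To make the training idea concrete, below is a minimal sketch, not the authors' released code, of how such a pipeline can be wired up in PyTorch: a generator maps clean images to bounded protected images, a differentiable stand-in for JPEG compression lets gradients flow through the pre-processing step, and the loss rewards perturbations that degrade a surrogate denoiser's predictions. All names here (`PerturbationGenerator`, `diff_jpeg`, `protection_loss`) and the surrogate network are hypothetical placeholders, not part of GAP-Diff.

```python
# Hypothetical sketch of a JPEG-robust protective-noise pipeline (not GAP-Diff itself).
import torch
import torch.nn as nn
import torch.nn.functional as F

class PerturbationGenerator(nn.Module):
    """Maps a clean image to a protected image with an L_inf-bounded perturbation."""
    def __init__(self, budget=8 / 255):
        super().__init__()
        self.budget = budget
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1),
        )

    def forward(self, x):
        delta = torch.tanh(self.net(x)) * self.budget  # bounded protective noise
        return (x + delta).clamp(0, 1)

def diff_jpeg(x):
    """Placeholder for a differentiable JPEG approximation.
    A real simulation module would use a DCT/quantization surrogate so gradients
    flow through compression; a simple blur stands in to keep the sketch runnable."""
    kernel = torch.ones(3, 1, 3, 3, device=x.device) / 9.0
    return F.conv2d(x, kernel, padding=1, groups=3)

def protection_loss(protected, surrogate_denoiser):
    """Adversarial objective: make the surrogate denoiser fail on the compressed
    protected image, so that fine-tuning on such images is disrupted."""
    compressed = diff_jpeg(protected)
    noise = torch.randn_like(compressed)
    pred = surrogate_denoiser(compressed + noise)  # surrogate denoising pass
    return -F.mse_loss(pred, noise)                # maximize denoising error

# Usage sketch: one optimization step of the generator on a toy batch.
gen = PerturbationGenerator()
surrogate = nn.Conv2d(3, 3, 3, padding=1)  # stand-in for a diffusion U-Net
opt = torch.optim.Adam(gen.parameters(), lr=1e-4)
x = torch.rand(4, 3, 64, 64)               # a batch of clean face images
opt.zero_grad()
loss = protection_loss(gen(x), surrogate)
loss.backward()
opt.step()
```

Once trained, such a generator can apply protection in a single forward pass, which is what allows noise to be produced within milliseconds rather than via per-image iterative optimization.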
