Yan Pang (University of Virginia), Tianhao Wang (University of Virginia)

With the rapid advancement of diffusion-based image-generative models, generated images have become increasingly photorealistic. Moreover, with the release of high-quality pre-trained image-generative models, a growing number of users are downloading these pre-trained models and fine-tuning them on downstream datasets for various image-generation tasks. However, employing such powerful pre-trained models in downstream tasks presents significant privacy leakage risks. In this paper, we propose the first score-based membership inference attack framework tailored to recent diffusion models, operating in the more stringent black-box access setting. Covering four distinct attack scenarios and three types of attacks, the framework can target any popular conditional generator model, achieving strong performance with an AUC of 0.95.
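The core intuition behind membership inference attacks of this kind is that a generative model tends to behave differently on its training members than on unseen samples, e.g., assigning them lower reconstruction or denoising error. The sketch below is purely illustrative and not the paper's method: it assumes hypothetical per-sample error values and shows how a threshold-free AUC for such an attack can be computed by pairwise comparison of member and non-member errors.

```python
import numpy as np

def attack_auc(errors_members, errors_nonmembers):
    """AUC of a threshold-based membership inference attack:
    the probability that a randomly chosen member has a lower
    error than a randomly chosen non-member (ties count half).
    """
    m = np.asarray(errors_members, dtype=float)[:, None]
    n = np.asarray(errors_nonmembers, dtype=float)[None, :]
    return float((m < n).mean() + 0.5 * (m == n).mean())

# Toy data (hypothetical): members have slightly lower error on average,
# mimicking a model that memorized its training set to some degree.
rng = np.random.default_rng(0)
member_errors = rng.normal(loc=0.8, scale=0.2, size=1000)
nonmember_errors = rng.normal(loc=1.2, scale=0.2, size=1000)
print(f"attack AUC on toy data: {attack_auc(member_errors, nonmember_errors):.3f}")
```

An AUC of 0.5 means the attack is no better than random guessing; values approaching 1.0 (such as the 0.95 reported in the abstract) indicate the model's scores reveal membership with high reliability.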
