Yan Pang (University of Virginia), Tianhao Wang (University of Virginia)

With the rapid advancement of diffusion-based image-generative models, the quality of generated images has become increasingly photorealistic. Moreover, with the release of high-quality pre-trained image-generative models, a growing number of users download these pre-trained models and fine-tune them on downstream datasets for various image-generation tasks. However, employing such powerful pre-trained models in downstream tasks carries significant privacy-leakage risks. In this paper, we propose the first score-based membership inference attack framework tailored to recent diffusion models, operating in the more stringent black-box access setting. Covering four distinct attack scenarios and three types of attacks, the framework can target any popular conditional generator model and achieves strong attack performance, as evidenced by an AUC of 0.95.
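The abstract does not spell out the attack internals, but the general shape of a score-based membership inference attack can be sketched: each candidate sample receives a scalar "membership score" (e.g., a reconstruction-error proxy obtained from black-box queries to the generator), and attack quality is summarized by the ROC AUC over member vs. non-member scores. The scores below are hypothetical, purely for illustration; this is not the paper's actual method.

```python
# Generic score-based membership inference sketch (illustrative only;
# not the paper's method). Higher score => more likely a training member.

def auc(member_scores, nonmember_scores):
    """Rank-based ROC AUC: the probability that a randomly chosen member
    outscores a randomly chosen non-member (ties count half)."""
    wins = 0.0
    for m in member_scores:
        for n in nonmember_scores:
            if m > n:
                wins += 1.0
            elif m == n:
                wins += 0.5
    return wins / (len(member_scores) * len(nonmember_scores))

# Hypothetical scores: members tend to score higher than non-members.
members = [0.9, 0.8, 0.85, 0.7]
nonmembers = [0.3, 0.4, 0.2, 0.6]
print(auc(members, nonmembers))  # 1.0 here: perfect separation
```

An AUC of 0.5 means the scores carry no membership signal; 1.0 means perfect separation, so the paper's reported 0.95 indicates the attacker's scores separate members from non-members almost perfectly.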
