Peizhuo Lv (Institute of Information Engineering, Chinese Academy of Sciences, China; School of Cyber Security, University of Chinese Academy of Sciences, China), Pan Li (Institute of Information Engineering, Chinese Academy of Sciences, China; School of Cyber Security, University of Chinese Academy of Sciences, China), Shenchen Zhu (Institute of Information Engineering, Chinese Academy of Sciences, China;…

Recent years have witnessed tremendous success in Self-Supervised Learning (SSL), which has been widely utilized to facilitate various downstream tasks in the Computer Vision (CV) and Natural Language Processing (NLP) domains. However, attackers may steal such SSL models and commercialize them for profit, making it crucial to verify the ownership of SSL models. Most existing ownership protection solutions (e.g., backdoor-based watermarks) are designed for supervised learning models and cannot be used directly, since they require that the model's downstream tasks and target labels be known and available during watermark embedding, which is not always possible in the SSL domain. To address this problem, especially when downstream tasks are diverse and unknown at watermark-embedding time, we propose a novel black-box watermarking solution, named SSL-WM, for verifying the ownership of SSL models. SSL-WM maps watermarked inputs of the protected encoders into an invariant representation space, which causes any downstream classifier to produce expected behavior and thus allows the embedded watermark to be detected. We evaluate SSL-WM on numerous tasks in both the CV and NLP domains, using both contrastive-based and generative-based SSL models. Experimental results demonstrate that SSL-WM can effectively verify the ownership of stolen SSL models across various downstream tasks. Furthermore, SSL-WM is robust against model fine-tuning, pruning, and input-preprocessing attacks. Lastly, SSL-WM can also evade detection by the evaluated watermark detection approaches, demonstrating its promise for protecting the ownership of SSL models.
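To make the core mechanism concrete, the sketch below illustrates (in PyTorch) one plausible reading of the abstract's description: fine-tune an encoder so that any trigger-stamped input is mapped into a tight, input-invariant cluster in representation space, while a distillation term preserves clean-input representations. This is a minimal sketch, not the authors' implementation; the names `Encoder`, `apply_trigger`, and `lambda_wm` are hypothetical, as are the architecture and loss weights.

```python
# Minimal sketch (not the authors' code) of the SSL-WM idea described above:
# watermarked inputs collapse to an invariant region of representation space,
# while clean representations stay close to those of the original encoder.
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

class Encoder(nn.Module):
    """Stand-in SSL encoder; a real one would be, e.g., a SimCLR ResNet."""
    def __init__(self, dim_in=3 * 32 * 32, dim_out=128):
        super().__init__()
        self.net = nn.Sequential(nn.Flatten(), nn.Linear(dim_in, 256),
                                 nn.ReLU(), nn.Linear(256, dim_out))
    def forward(self, x):
        return F.normalize(self.net(x), dim=-1)

def apply_trigger(x, patch_value=1.0, size=4):
    """Stamp a fixed patch in one corner as a (hypothetical) watermark trigger."""
    x = x.clone()
    x[..., :size, :size] = patch_value
    return x

def watermark_loss(encoder, frozen_encoder, clean_batch, lambda_wm=1.0):
    z_wm = encoder(apply_trigger(clean_batch))      # watermarked representations
    z_clean = encoder(clean_batch)                  # current clean representations
    with torch.no_grad():
        z_ref = frozen_encoder(clean_batch)         # original clean representations
    # Invariance term: pull all watermarked representations toward their mean,
    # so any downstream classifier treats triggered inputs alike.
    center = z_wm.mean(dim=0, keepdim=True).detach()
    loss_invariant = (1 - F.cosine_similarity(z_wm, center, dim=-1)).mean()
    # Utility term: keep clean representations close to the original encoder's.
    loss_utility = (1 - F.cosine_similarity(z_clean, z_ref, dim=-1)).mean()
    return loss_utility + lambda_wm * loss_invariant

encoder = Encoder()
frozen_encoder = copy.deepcopy(encoder).eval()      # snapshot of the original
opt = torch.optim.Adam(encoder.parameters(), lr=1e-4)
for _ in range(100):
    batch = torch.rand(64, 3, 32, 32)               # stand-in for real data
    loss = watermark_loss(encoder, frozen_encoder, batch)
    opt.zero_grad(); loss.backward(); opt.step()
```

At verification time, consistent with the black-box setting described above, the defender would query the suspect model's downstream classifier with trigger-stamped inputs and test whether its predictions concentrate abnormally on a single label.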
