Peizhuo Lv (Institute of Information Engineering, Chinese Academy of Sciences, China; School of Cyber Security, University of Chinese Academy of Sciences, China), Pan Li (Institute of Information Engineering, Chinese Academy of Sciences, China; School of Cyber Security, University of Chinese Academy of Sciences, China), Shenchen Zhu (Institute of Information Engineering, Chinese Academy of Sciences, China; School of Cyber Security, University of Chinese Academy of Sciences, China), Shengzhi Zhang (Department of Computer Science, Metropolitan College, Boston University, USA), Kai Chen (Institute of Information Engineering, Chinese Academy of Sciences, China; School of Cyber Security, University of Chinese Academy of Sciences, China), Ruigang Liang (Institute of Information Engineering, Chinese Academy of Sciences, China; School of Cyber Security, University of Chinese Academy of Sciences, China), Chang Yue (Institute of Information Engineering, Chinese Academy of Sciences, China; School of Cyber Security, University of Chinese Academy of Sciences, China), Fang Xiang (Institute of Information Engineering, Chinese Academy of Sciences, China; School of Cyber Security, University of Chinese Academy of Sciences, China), Yuling Cai (Institute of Information Engineering, Chinese Academy of Sciences, China; School of Cyber Security, University of Chinese Academy of Sciences, China), Hualong Ma (Institute of Information Engineering, Chinese Academy of Sciences, China; School of Cyber Security, University of Chinese Academy of Sciences, China), Yingjun Zhang (Institute of Software, Chinese Academy of Sciences, China), Guozhu Meng (Institute of Information Engineering, Chinese Academy of Sciences, China; School of Cyber Security, University of Chinese Academy of Sciences, China)

Recent years have witnessed tremendous success in Self-Supervised Learning (SSL), which has been widely utilized to facilitate various downstream tasks in the Computer Vision (CV) and Natural Language Processing (NLP) domains. However, attackers may steal such SSL models and commercialize them for profit, making it crucial to verify the ownership of SSL models. Most existing ownership protection solutions (e.g., backdoor-based watermarks) are designed for supervised learning models and cannot be used directly, since they require that the models' downstream tasks and target labels be known and available during watermark embedding, which is not always possible in the domain of SSL. To address this problem, especially when downstream tasks are diverse and unknown during watermark embedding, we propose a novel black-box watermarking solution, named SSL-WM, for verifying the ownership of SSL models. SSL-WM maps watermarked inputs of the protected encoders into an invariant representation space, which causes any downstream classifier to produce the expected behavior, thus allowing the embedded watermarks to be detected. We evaluate SSL-WM on numerous tasks in the CV and NLP domains, using both contrastive-based and generative-based SSL models. Experimental results demonstrate that SSL-WM can effectively verify the ownership of stolen SSL models across various downstream tasks. Furthermore, SSL-WM is robust against model fine-tuning, pruning, and input preprocessing attacks. Lastly, SSL-WM can also evade the evaluated watermark detection approaches, demonstrating its promising application in protecting the ownership of SSL models.
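To make the core idea concrete, below is a minimal sketch of how an encoder might be fine-tuned so that trigger-stamped inputs collapse to a near-invariant representation while clean inputs keep their original features. This is an illustrative assumption of the mechanism described in the abstract, not the authors' exact formulation; the helper apply_trigger, the invariance_weight parameter, and the specific loss terms are hypothetical.

import torch
import torch.nn.functional as F

def apply_trigger(x, trigger, mask):
    # Stamp a fixed trigger pattern onto a batch of inputs (hypothetical helper).
    return x * (1 - mask) + trigger * mask

def watermark_step(encoder, clean_encoder, x, trigger, mask, invariance_weight=1.0):
    # Representations of clean and watermarked inputs from the protected encoder.
    z_clean = encoder(x)
    z_wm = encoder(apply_trigger(x, trigger, mask))

    # Invariance loss: pull all watermarked representations toward their batch mean,
    # so any downstream classifier sees (nearly) the same feature for triggered inputs.
    center = z_wm.mean(dim=0, keepdim=True).detach()
    invariance_loss = F.mse_loss(z_wm, center.expand_as(z_wm))

    # Utility loss: keep clean representations close to a frozen copy of the original
    # encoder so normal downstream accuracy is preserved.
    with torch.no_grad():
        z_ref = clean_encoder(x)
    utility_loss = F.mse_loss(z_clean, z_ref)

    return utility_loss + invariance_weight * invariance_loss

In a training loop, this loss would be minimized over batches drawn from an unlabeled dataset; ownership would later be verified in a black-box manner by checking whether a downstream classifier's outputs on trigger-stamped inputs are abnormally concentrated compared to clean inputs.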

