Peizhuo Lv (Institute of Information Engineering, Chinese Academy of Sciences, China; School of Cyber Security, University of Chinese Academy of Sciences, China), Pan Li (Institute of Information Engineering, Chinese Academy of Sciences, China; School of Cyber Security, University of Chinese Academy of Sciences, China), Shenchen Zhu (Institute of Information Engineering, Chinese Academy of Sciences, China; School of Cyber Security, University of Chinese Academy of Sciences, China), Shengzhi Zhang (Department of Computer Science, Metropolitan College, Boston University, USA), Kai Chen (Institute of Information Engineering, Chinese Academy of Sciences, China; School of Cyber Security, University of Chinese Academy of Sciences, China), Ruigang Liang (Institute of Information Engineering, Chinese Academy of Sciences, China; School of Cyber Security, University of Chinese Academy of Sciences, China), Chang Yue (Institute of Information Engineering, Chinese Academy of Sciences, China; School of Cyber Security, University of Chinese Academy of Sciences, China), Fang Xiang (Institute of Information Engineering, Chinese Academy of Sciences, China; School of Cyber Security, University of Chinese Academy of Sciences, China), Yuling Cai (Institute of Information Engineering, Chinese Academy of Sciences, China; School of Cyber Security, University of Chinese Academy of Sciences, China), Hualong Ma (Institute of Information Engineering, Chinese Academy of Sciences, China; School of Cyber Security, University of Chinese Academy of Sciences, China), Yingjun Zhang (Institute of Software, Chinese Academy of Sciences, China), Guozhu Meng (Institute of Information Engineering, Chinese Academy of Sciences, China; School of Cyber Security, University of Chinese Academy of Sciences, China)

Recent years have witnessed tremendous success in Self-Supervised Learning (SSL), which has been widely utilized to facilitate various downstream tasks in the Computer Vision (CV) and Natural Language Processing (NLP) domains. However, attackers may steal such SSL models and commercialize them for profit, making it crucial to verify the ownership of SSL models. Most existing ownership protection solutions (e.g., backdoor-based watermarks) are designed for supervised learning models and cannot be used directly, since they require that the models' downstream tasks and target labels be known and available during watermark embedding, which is not always possible in the SSL domain. To address this problem, especially when downstream tasks are diverse and unknown during watermark embedding, we propose a novel black-box watermarking solution, named SSL-WM, for verifying the ownership of SSL models. SSL-WM maps watermarked inputs of the protected encoders into an invariant representation space, which causes any downstream classifier to produce the expected behavior and thus allows detection of the embedded watermarks. We evaluate SSL-WM on numerous tasks in both the CV and NLP domains, using different SSL models, including contrastive-based and generative-based ones. Experimental results demonstrate that SSL-WM can effectively verify the ownership of stolen SSL models across various downstream tasks. Furthermore, SSL-WM is robust against model fine-tuning, pruning, and input-preprocessing attacks. Finally, SSL-WM can also evade detection by the evaluated watermark detection approaches, demonstrating its promising application in protecting the ownership of SSL models.
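
The abstract's core idea is that trigger-stamped (watermarked) inputs are mapped into an invariant representation space, so that any downstream classifier behaves consistently on them and the watermark can be verified in a black-box manner. The sketch below is a minimal, hypothetical illustration of that idea in PyTorch, not the authors' actual method: the encoder objects, the additive trigger, the target_repr vector, the frozen reference encoder, and the lambda_wm weighting are all assumptions introduced only for illustration.

import torch
import torch.nn.functional as F

def embed_watermark_step(encoder, frozen_encoder, clean_batch, trigger,
                         target_repr, optimizer, lambda_wm=1.0):
    # One optimization step: a utility loss keeps clean representations close to
    # those of the original (frozen) encoder, while an invariance loss pulls the
    # representations of trigger-stamped inputs toward a single target vector.
    optimizer.zero_grad()

    with torch.no_grad():
        reference = frozen_encoder(clean_batch)  # original behavior on clean data
    utility_loss = F.mse_loss(encoder(clean_batch), reference)

    # Additive trigger pattern is an assumption made for this sketch.
    wm_batch = torch.clamp(clean_batch + trigger, 0.0, 1.0)
    wm_repr = encoder(wm_batch)
    invariance_loss = F.mse_loss(wm_repr, target_repr.expand_as(wm_repr))

    loss = utility_loss + lambda_wm * invariance_loss
    loss.backward()
    optimizer.step()
    return loss.item()

def verify_watermark(downstream_classifier, probe_batch, trigger, threshold=0.5):
    # Black-box verification sketch (assumption): if the encoder carries the
    # watermark, the downstream classifier should concentrate its predictions for
    # trigger-stamped inputs on one class far more often than chance.
    with torch.no_grad():
        wm_batch = torch.clamp(probe_batch + trigger, 0.0, 1.0)
        preds = downstream_classifier(wm_batch).argmax(dim=1)
    top_class_rate = preds.bincount().max().item() / preds.numel()
    return top_class_rate >= threshold

Under these assumptions, only verify_watermark-style queries to a deployed downstream model would be needed at verification time, while the embedding step runs on the owner's side before the encoder is released.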

View More Papers

Proof of Backhaul: Trustfree Measurement of Broadband Bandwidth

Peiyao Sheng (Kaleidoscope Blockchain Inc.), Nikita Yadav (Indian Institute of Science), Vishal Sevani (Kaleidoscope Blockchain Inc.), Arun Babu (Kaleidoscope Blockchain Inc.), Anand Svr (Kaleidoscope Blockchain Inc.), Himanshu Tyagi (Indian Institute of Science), Pramod Viswanath (Kaleidoscope Blockchain Inc.)


Private Aggregate Queries to Untrusted Databases

Syed Mahbub Hafiz (University of California, Davis), Chitrabhanu Gupta (University of California, Davis), Warren Wnuck (University of California, Davis), Brijesh Vora (University of California, Davis), Chen-Nee Chuah (University of California, Davis)


ORL-AUDITOR: Dataset Auditing in Offline Deep Reinforcement Learning

Linkang Du (Zhejiang University), Min Chen (CISPA Helmholtz Center for Information Security), Mingyang Sun (Zhejiang University), Shouling Ji (Zhejiang University), Peng Cheng (Zhejiang University), Jiming Chen (Zhejiang University), Zhikun Zhang (CISPA Helmholtz Center for Information Security and Stanford University)


BreakSPF: How Shared Infrastructures Magnify SPF Vulnerabilities Across the...

Chuhan Wang (Tsinghua University), Yasuhiro Kuranaga (Tsinghua University), Yihang Wang (Tsinghua University), Mingming Zhang (Zhongguancun Laboratory), Linkai Zheng (Tsinghua University), Xiang Li (Tsinghua University), Jianjun Chen (Tsinghua University; Zhongguancun Laboratory), Haixin Duan (Tsinghua University; Quan Cheng Lab; Zhongguancun Laboratory), Yanzhong Lin (Coremail Technology Co. Ltd), Qingfeng Pan (Coremail Technology Co. Ltd)
