Ge Ren (Shanghai Jiao Tong University), Gaolei Li (Shanghai Jiao Tong University), Shenghong Li (Shanghai Jiao Tong University), Libo Chen (Shanghai Jiao Tong University), Kui Ren (Zhejiang University)

Well-trained deep neural network (DNN) models can be treated as commodities in commercial transactions and generate significant revenue, raising an urgent need for intellectual property (IP) protection against illegitimate reproduction. Emerging studies on IP protection typically insert watermarks into DNNs, allowing owners to passively verify ownership of a target model only after a counterfeit appears and commercial benefits have already been infringed; active authentication against unauthorized queries of DNN-based applications remains neglected. In this paper, we propose a novel approach to model IP protection, called ActiveDaemon, which builds an access-control function into the DNN itself to guard against commercial piracy. Specifically, our approach enables a DNN to predict correct outputs only for authorized users holding user-specific tokens, while producing poor accuracy for unauthorized users. In ActiveDaemon, the user-specific tokens are generated by a specially designed U-Net-style encoder-decoder network, which maps strings and input images into numerous noise images, supporting identity management at large user scale. Compared with existing studies, these user-specific tokens are invisible, dynamic, and more perceptually concealed, enhancing the stealthiness and reliability of model IP protection. To automatically restore full model accuracy for authorized queries, we use data-poisoning-based training to imperceptibly embed the ActiveDaemon behavior into the network's neurons. We conduct experiments comparing the protection performance of ActiveDaemon with four state-of-the-art approaches on four datasets. The experimental results show that ActiveDaemon reduces the accuracy of unauthorized queries by as much as 81% with less than a 1.4% decrease in that of authorized queries. Meanwhile, our approach also reduces the LPIPS scores of the authorized tokens to 0.0027 on CIFAR10 and 0.0368 on ImageNet.
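The abstract does not include an implementation, but the core idea of poisoning-based access control can be illustrated with a minimal, self-contained sketch. The toy setup below (numpy, a two-layer MLP, and a fixed additive noise pattern standing in for the paper's U-Net-generated user-specific tokens) is an assumption of this sketch, not the authors' method: the model is trained so that token-stamped inputs keep their true labels while clean inputs are trained toward flipped labels, so only authorized (tokened) queries see good accuracy.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 16-dim inputs, binary label determined by the first coordinate.
d, n = 16, 400
X = rng.normal(size=(n, d))
y = (X[:, 0] > 0).astype(int)

# Fixed additive noise pattern acting as the user-specific token
# (a toy stand-in for the paper's U-Net encoder-decoder output).
token = rng.normal(scale=3.0, size=d)

# Poisoned training set: tokened inputs keep true labels;
# clean (unauthorized) inputs are trained toward flipped labels.
X_tr = np.vstack([X + token, X])
y_tr = np.concatenate([y, 1 - y])

# Small two-layer MLP trained with full-batch gradient descent.
H = 32
W1 = rng.normal(scale=0.3, size=(d, H)); b1 = np.zeros(H)
W2 = rng.normal(scale=0.3, size=(H, 2)); b2 = np.zeros(2)
lr, m = 0.3, len(y_tr)
for _ in range(3000):
    Z1 = X_tr @ W1 + b1
    A1 = np.maximum(Z1, 0.0)                      # ReLU
    logits = A1 @ W2 + b2
    logits -= logits.max(axis=1, keepdims=True)   # numerically stable softmax
    P = np.exp(logits); P /= P.sum(axis=1, keepdims=True)
    G = P.copy(); G[np.arange(m), y_tr] -= 1.0; G /= m  # dL/dlogits
    dW2, db2 = A1.T @ G, G.sum(0)
    dZ1 = (G @ W2.T) * (Z1 > 0)
    dW1, db1 = X_tr.T @ dZ1, dZ1.sum(0)
    W1 -= lr * dW1; b1 -= lr * db1; W2 -= lr * dW2; b2 -= lr * db2

def predict(A):
    return np.argmax(np.maximum(A @ W1 + b1, 0.0) @ W2 + b2, axis=1)

acc_auth = (predict(X + token) == y).mean()   # authorized: token applied
acc_unauth = (predict(X) == y).mean()         # unauthorized: raw input
print(f"authorized acc={acc_auth:.2f}, unauthorized acc={acc_unauth:.2f}")
```

With the token applied, the model behaves normally; without it, accuracy collapses, mirroring the reported gap between authorized and unauthorized queries (the architecture, data, and training loop here are purely illustrative).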
