Ruyi Ding (Northeastern University), Tong Zhou (Northeastern University), Lili Su (Northeastern University), Aidong Adam Ding (Northeastern University), Xiaolin Xu (Northeastern University), Yunsi Fei (Northeastern University)

Adapting pre-trained deep learning models to customized tasks has become a popular choice for developers coping with limited computational resources and data. In particular, probing, i.e., training a classifier on top of a pre-trained encoder, has been widely adopted in transfer learning because it helps prevent overfitting and catastrophic forgetting. However, the generalizability of pre-trained encoders raises concerns about the potential misuse of probing for harmful applications, such as discriminatory speculation and warfare applications. In this work, we introduce EncoderLock, a novel applicability-authorization method designed to protect pre-trained encoders from malicious probing, i.e., yielding poor performance on specified prohibited domains while maintaining utility on authorized ones. Achieving this balance is challenging because the two goals pull the optimization in opposite directions and because adversaries can adaptively choose from a variety of downstream heads. To address these challenges, EncoderLock employs two techniques: domain-aware weight selection and updating, which restricts applications on prohibited domains/tasks, and a self-challenging training scheme, which iteratively strengthens resistance against any downstream classifier an adversary may apply. Moreover, recognizing that data from prohibited domains may be unavailable in practice, we introduce three EncoderLock variants with different levels of data accessibility: supervised (prohibited-domain data with labels), unsupervised (prohibited-domain data without labels), and zero-shot (no data or labels available). Extensive experiments across fifteen domains and three model architectures demonstrate EncoderLock's effectiveness over baseline methods based on non-transferable learning. Additionally, we verify EncoderLock's effectiveness and practicality on a real-world pre-trained Vision Transformer (ViT) encoder from Facebook. These results underscore EncoderLock's valuable contribution to the development of responsible AI.
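The two mechanisms named in the abstract suggest an alternating min-max procedure. The following is a minimal, hypothetical PyTorch sketch of the supervised variant; the function names (topk_mask, lock_step), the gradient-magnitude selection rule, and the exact loss combination are assumptions inferred from the abstract, not the authors' released implementation.

# Hypothetical sketch of supervised EncoderLock-style training.
# Selection rule, loss forms, and hyperparameters are assumptions,
# inferred from the abstract rather than taken from the paper.
import torch
import torch.nn.functional as F

def topk_mask(params, loss, k):
    # Domain-aware weight selection (assumed form): mark the k weights whose
    # gradients w.r.t. the prohibited-domain loss have the largest magnitude.
    grads = torch.autograd.grad(loss, params, retain_graph=True)
    flat = torch.cat([g.abs().flatten() for g in grads])
    thresh = flat.topk(min(k, flat.numel())).values[-1]
    return [(g.abs() >= thresh).float() for g in grads]

def lock_step(encoder, auth_head, adv_probe, auth_batch, prohib_batch,
              enc_opt, probe_opt, k=10_000):
    (xa, ya), (xp, yp) = auth_batch, prohib_batch

    # Self-challenging step (assumed): let the adversarial probe adapt first,
    # simulating an attacker training the best head on the current encoder.
    probe_opt.zero_grad()
    F.cross_entropy(adv_probe(encoder(xp).detach()), yp).backward()
    probe_opt.step()

    # Encoder step: preserve authorized-domain utility while degrading the
    # probe on the prohibited domain (the two opposing objectives).
    loss_auth = F.cross_entropy(auth_head(encoder(xa)), ya)
    loss_prohib = F.cross_entropy(adv_probe(encoder(xp)), yp)
    params = [p for p in encoder.parameters() if p.requires_grad]
    masks = topk_mask(params, loss_prohib, k)

    enc_opt.zero_grad()
    (loss_auth - loss_prohib).backward()
    with torch.no_grad():
        for p, m in zip(params, masks):
            if p.grad is not None:
                p.grad.mul_(m)  # update only the selected encoder weights
    enc_opt.step()

if __name__ == "__main__":
    # Toy wiring with random data; shapes are illustrative only.
    torch.manual_seed(0)
    encoder = torch.nn.Sequential(torch.nn.Linear(32, 64), torch.nn.ReLU())
    auth_head = torch.nn.Linear(64, 10)   # authorized task, 10 classes
    adv_probe = torch.nn.Linear(64, 5)    # adversary's probe, 5 classes
    enc_opt = torch.optim.SGD(
        list(encoder.parameters()) + list(auth_head.parameters()), lr=0.01)
    probe_opt = torch.optim.SGD(adv_probe.parameters(), lr=0.1)
    auth = (torch.randn(16, 32), torch.randint(0, 10, (16,)))
    prohib = (torch.randn(16, 32), torch.randint(0, 5, (16,)))
    for _ in range(3):
        lock_step(encoder, auth_head, adv_probe, auth, prohib,
                  enc_opt, probe_opt, k=100)

In this reading, the probe update plays the adaptive adversary (self-challenging), while the masked encoder update realizes domain-aware weight selection: only the weights most responsible for prohibited-domain performance are modified, which helps preserve authorized-domain utility.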
