Saisai Xia (State Key Laboratory of Cyberspace Security Defense, Institute of Information Engineering, CAS and School of Cyber Security, University of Chinese Academy of Sciences), Wenhao Wang (State Key Laboratory of Cyberspace Security Defense, Institute of Information Engineering, CAS and School of Cyber Security, University of Chinese Academy of Sciences), Zihao Wang (Nanyang Technological University), Yuhui Zhang (State Key Laboratory of Cyberspace Security Defense, Institute of Information Engineering, CAS and School of Cyber Security, University of Chinese Academy of Sciences), Yier Jin (University of Science and Technology of China), Dan Meng (State Key Laboratory of Cyberspace Security Defense, Institute of Information Engineering, CAS and School of Cyber Security, University of Chinese Academy of Sciences), Rui Hou (State Key Laboratory of Cyberspace Security Defense, Institute of Information Engineering, CAS and School of Cyber Security, University of Chinese Academy of Sciences)

Publicly available large pretrained models (i.e., backbones) and lightweight adapters for parameter-efficient fine-tuning (PEFT) have become standard components in modern machine learning pipelines. However, preserving the privacy of both user inputs and fine-tuned adapters, which are often trained on sensitive data, during inference remains a significant challenge. Applying cryptographic techniques such as multi-party computation (MPC) to PEFT settings still incurs substantial encrypted computation across both the backbone and the adapter, mainly due to the inherent two-way communication between them. To address this limitation, we propose CRYPTPEFT, the first PEFT solution specifically designed for private inference. CRYPTPEFT introduces a novel one-way communication (OWC) architecture that confines encrypted computation solely to the adapter, significantly reducing both computational and communication overhead. To maintain strong model utility under this constraint, we explore the design space of OWC-compatible adapters and employ an automated architecture search algorithm to optimize the trade-off between private inference efficiency and model utility. We evaluated CRYPTPEFT with Vision Transformer backbones on widely used image classification datasets. Our results show that CRYPTPEFT significantly outperforms existing baselines, delivering speedups ranging from 20.62× to 291.48× in simulated wide-area network (WAN) and local-area network (LAN) settings. On CIFAR-100, CRYPTPEFT attains 85.47% accuracy with just 2.26 seconds of inference latency. These findings demonstrate that CRYPTPEFT offers an efficient and privacy-preserving solution for modern PEFT-based inference.
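To make the OWC idea concrete, below is a minimal sketch, not the authors' protocol: it assumes the client evaluates the public frozen backbone locally in plaintext and then additively secret-shares the resulting features, so that only the adapter's computation runs under MPC and nothing flows back to the backbone. All names (backbone, share, W_adapter) are illustrative, and for brevity the adapter weight is visible to both share-holders; a real deployment would keep it private, e.g., via Beaver-triple or OT-based secure matrix multiplication.

```python
import numpy as np

rng = np.random.default_rng(0)

# Public frozen backbone: a toy stand-in for a ViT feature extractor.
# Because its weights are public, the client can run it locally in
# plaintext, so no encrypted computation is needed on the backbone side.
W_pub = rng.standard_normal((16, 8))

def backbone(x):
    return np.tanh(x @ W_pub)

# One-way hand-off into MPC: additively secret-share the backbone
# features. r + (v - r) == v, and neither share alone reveals v.
def share(v):
    r = rng.standard_normal(v.shape)
    return r, v - r

# Encrypted adapter region (the only MPC region under OWC). For brevity
# the adapter weight is visible to both compute parties here; a real
# protocol would hide it behind secure matrix multiplication.
W_adapter = rng.standard_normal((8, 4))

x = rng.standard_normal((1, 16))    # private user input
feat = backbone(x)                  # plaintext backbone pass on the client
s0, s1 = share(feat)                # features flow one way into the adapter

# Matrix multiplication is linear, so each party multiplies its own share;
# the output shares reconstruct to the plaintext adapter result.
y0, y1 = s0 @ W_adapter, s1 @ W_adapter
assert np.allclose(y0 + y1, feat @ W_adapter)
print("OWC adapter output shape:", (y0 + y1).shape)
```

Because the backbone never receives anything from the adapter, the expensive transformer stack stays entirely outside the MPC boundary, which is the source of the reported speedups.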
