Saisai Xia (State Key Laboratory of Cyberspace Security Defense, Institute of Information Engineering, CAS and School of Cyber Security, University of Chinese Academy of Sciences), Wenhao Wang (State Key Laboratory of Cyberspace Security Defense, Institute of Information Engineering, CAS and School of Cyber Security, University of Chinese Academy of Sciences), Zihao Wang (Nanyang Technological University), Yuhui Zhang (State Key Laboratory of Cyberspace Security Defense, Institute of Information Engineering, CAS and School of Cyber Security, University of Chinese Academy of Sciences), Yier Jin (University of Science and Technology of China), Dan Meng (State Key Laboratory of Cyberspace Security Defense, Institute of Information Engineering, CAS and School of Cyber Security, University of Chinese Academy of Sciences), Rui Hou (State Key Laboratory of Cyberspace Security Defense, Institute of Information Engineering, CAS and School of Cyber Security, University of Chinese Academy of Sciences)

Publicly available large pretrained models (i.e., backbones) and lightweight adapters for parameter-efficient fine-tuning (PEFT) have become standard components in modern machine learning pipelines. However, preserving the privacy of both user inputs and fine-tuned adapters, which are often trained on sensitive data, during inference remains a significant challenge. Applying cryptographic techniques, such as multi-party computation (MPC), to PEFT settings still incurs substantial encrypted computation across both the backbone and the adapter, mainly due to the inherent two-way communication between them. To address this limitation, we propose CRYPTPEFT, the first PEFT solution specifically designed for private inference scenarios. CRYPTPEFT introduces a novel one-way communication (OWC) architecture that confines encrypted computation solely to the adapter, significantly reducing both computational and communication overhead. To maintain strong model utility under this constraint, we explore the design space of OWC-compatible adapters and employ an automated architecture search algorithm to optimize the trade-off between private inference efficiency and model utility. We evaluate CRYPTPEFT using Vision Transformer backbones across widely used image classification datasets. Our results show that CRYPTPEFT significantly outperforms existing baselines, delivering speedups ranging from 20.62× to 291.48× in simulated wide-area network (WAN) and local-area network (LAN) settings. On CIFAR-100, CRYPTPEFT attains 85.47% accuracy with just 2.26 seconds of inference latency. These findings demonstrate that CRYPTPEFT offers an efficient and privacy-preserving solution for modern PEFT-based inference.
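To make the one-way communication idea concrete, the following is a minimal sketch, not the paper's actual architecture: it assumes a public ViT-style backbone that runs in plaintext and an adapter that only consumes backbone features, never feeding results back into backbone layers. The names `OWCAdapter` and `owc_inference` are hypothetical, and the adapter forward pass is evaluated in the clear here purely for illustration; in a real deployment that step is what would run under MPC.

```python
import torch
import torch.nn as nn


class OWCAdapter(nn.Module):
    """Illustrative adapter that reads frozen backbone features one-way.

    Hypothetical structure: a projection plus a classification head.
    The actual CRYPTPEFT adapter is found via architecture search.
    """

    def __init__(self, dim: int, num_classes: int):
        super().__init__()
        self.proj = nn.Linear(dim, dim)
        self.head = nn.Linear(dim, num_classes)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: [batch, dim] features tapped from the public backbone.
        return self.head(torch.relu(self.proj(feats)))


@torch.no_grad()
def owc_inference(backbone: nn.Module, adapter: OWCAdapter,
                  x: torch.Tensor) -> torch.Tensor:
    # Step 1 (plaintext): the backbone is public, so its forward pass
    # involves no secret weights and needs no encrypted computation.
    feats = backbone(x)  # assumed to return pooled features [batch, dim]

    # Step 2 (encrypted in a real deployment): only `feats` would be
    # secret-shared with the adapter owner, confining MPC to the adapter.
    # Because nothing flows back into the backbone, the two-way traffic
    # that makes conventional PEFT (e.g., LoRA) expensive under MPC is
    # avoided by construction.
    return adapter(feats)
```

By contrast, a LoRA-style adapter inserted inside backbone layers would force every subsequent backbone layer to consume adapter outputs, pulling the whole backbone into the encrypted computation; the one-way structure above is what lets encrypted work stop at the adapter boundary.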
