Tianpei Lu (The State Key Laboratory of Blockchain and Data Security, Zhejiang University), Bingsheng Zhang (The State Key Laboratory of Blockchain and Data Security, Zhejiang University), Xiaoyuan Zhang (The State Key Laboratory of Blockchain and Data Security, Zhejiang University), Kui Ren (The State Key Laboratory of Blockchain and Data Security, Zhejiang University)

Model quantization has become a common practice in machine learning (ML) to improve efficiency and reduce computational and communication overhead. However, adopting quantization in privacy-preserving machine learning (PPML) remains challenging due to the complex internal structure of quantized operators, which leads to inefficient protocols under existing PPML frameworks.

In this work, we propose a new PPML paradigm that is tailor-made for, and benefits from, quantized models. Our main observation is that a look-up table abstracts away the internal structure of any function, which greatly simplifies quantized operator evaluation. We view model inference as a sequence of quantized operators and implement each operator as a look-up table. We then develop an efficient private look-up table evaluation protocol whose online communication cost is only $\log n$ bits, where $n$ is the size of the look-up table.
On a single CPU core, our protocol can evaluate $2^{26}$ look-up tables with 8-bit input and 8-bit output per second.
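
To illustrate the paradigm, here is a minimal Python sketch, not the paper's actual protocol: it tabulates a toy 8-bit quantized GELU as a 256-entry look-up table and uses the standard rotate-and-mask trick to show why the online phase only needs to send a masked index of $\log n$ bits. The quantization parameters, the trusted-dealer preprocessing, and all names are illustrative assumptions.

```python
import secrets
import numpy as np

# --- 1. Quantized operator as a table (illustrative toy scheme) ----------
# Any 8-bit -> 8-bit operator (here: GELU under an assumed symmetric
# quantization) is fully described by its 256-entry table, so the secure
# protocol never needs to evaluate the function's internal structure.
def build_gelu_lut(scale=0.05, zero=128):
    x = (np.arange(256) - zero) * scale                        # dequantize
    g = 0.5 * x * (1 + np.tanh(np.sqrt(2 / np.pi) * (x + 0.044715 * x**3)))
    return np.clip(np.round(g / scale + zero), 0, 255).astype(np.uint8)

T = build_gelu_lut()
n = len(T)                            # table size n = 256, log2(n) = 8

# --- 2. Offline phase (input-independent preprocessing by a dealer) ------
r = secrets.randbelow(n)              # random rotation, held by the client
s = secrets.randbelow(256)            # one-time pad for the output byte
T_masked = np.array([T[(j + r) % n] ^ s for j in range(n)], dtype=np.uint8)
# the server stores only the rotated, masked table T_masked

# --- 3. Online phase ------------------------------------------------------
x = 200                               # client's private 8-bit input
m = (x - r) % n                       # masked index: the log2(n)-bit message
reply = T_masked[m]                   # server answers with one masked byte
y = reply ^ s                         # client removes the pad
assert y == T[x]                      # correct look-up, input never revealed
```

In a full protocol the table and the output would themselves be secret-shared between the parties; the sketch only conveys why the input-dependent online traffic can be as small as a single masked index.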

The resulting PPML framework for quantized models offers extremely fast online performance.
The experimental results demonstrate that our quantization strategy achieves substantial speedups over state-of-the-art (SOTA) PPML solutions, improving online performance by $40\sim 60\times$ on convolutional neural network (CNN) models such as AlexNet, VGG16, and ResNet18, and by $10\sim 25\times$ on large language models (LLMs) such as GPT-2, GPT-Neo, and Llama2.
