Mu Yuan (The Chinese University of Hong Kong), Lan Zhang (University of Science and Technology of China), Yihang Cheng (University of Science and Technology of China), Miao-Hui Song (University of Science and Technology of China), Guoliang Xing (The Chinese University of Hong Kong), Xiang-Yang Li (University of Science and Technology of China)

The privacy of model parameters and user data is crucial for Transformer-based cloud services, such as online chatbots. While recent advances in secure multi-party computation and homomorphic encryption provide strong cryptographic guarantees, their computational overhead makes them infeasible for real-time inference with large-scale Transformer models.

In this work, we propose a practical alternative that balances privacy and efficiency in real-world deployments.
We introduce a three-party threat model involving a model developer, a cloud model server, and a data owner, capturing the trust assumptions and deployment conditions of practical AI services.
Within this framework, we design a semi-symmetric permutation-based protection mechanism and present STIP, the first three-party privacy-preserving inference system for large Transformers deployable on commodity hardware.
STIP formally bounds privacy leakage while achieving lossless inference accuracy.
To further safeguard model parameters, STIP integrates trusted execution environments to resist model extraction and fine-tuning attacks.
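The full construction of the semi-symmetric permutation mechanism is given in the paper; the intuition is that a secret permutation applied consistently to user features and to model parameters leaves the computation's result unchanged, while the cloud server only ever sees permuted data and permuted weights. A minimal sketch of this invariance for a single linear layer (all variable names here are illustrative, not from STIP):

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out, n = 8, 4, 3

x = rng.standard_normal((n, d_in))      # user data (held by the data owner)
W = rng.standard_normal((d_in, d_out))  # model weights (held by the developer)

# Secret permutation of the feature dimension, unknown to the cloud server.
pi = rng.permutation(d_in)

x_perm = x[:, pi]   # data owner uploads permuted features
W_perm = W[pi, :]   # developer deploys correspondingly permuted weights

# The server computes only on permuted inputs and weights,
# yet the output matches the plaintext computation exactly.
y_server = x_perm @ W_perm
y_plain = x @ W
assert np.allclose(y_server, y_plain)
```

Because permutation is a lossless relabeling, accuracy is preserved by construction; the protection comes from the server never learning the permutation that would align features with their semantic meaning.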

We evaluate STIP on six representative Transformer model families, including models with up to 70 billion parameters, under three deployment settings.
STIP's efficiency is comparable to that of unprotected full-cloud inference; for example, STIP achieves 31.7 ms latency on the LLaMA2-7B model.
STIP also shows effective resistance to various attacks against user data and model parameters.
STIP has been deployed in a production environment on our proprietary 70B model.
In a three-month online test, STIP introduced only 12% additional latency and no privacy incidents were reported, demonstrating its practicality and robustness for production-scale AI systems.
