Mu Yuan (The Chinese University of Hong Kong), Lan Zhang (University of Science and Technology of China), Yihang Cheng (University of Science and Technology of China), Miao-Hui Song (University of Science and Technology of China), Guoliang Xing (The Chinese University of Hong Kong), Xiang-Yang Li (University of Science and Technology of China)

The privacy of model parameters and user data is crucial for Transformer-based cloud services, such as online chatbots. While recent advances in secure multi-party computation and homomorphic encryption provide strong cryptographic guarantees, their computational overhead makes them infeasible for real-time inference with large-scale Transformer models.

In this work, we propose a practical alternative that balances privacy and efficiency in real-world deployments.
We introduce a three-party threat model involving a model developer, a cloud model server, and a data owner, capturing the trust assumptions and deployment conditions of practical AI services.
Within this framework, we design a semi-symmetric permutation-based protection mechanism and present STIP, the first three-party privacy-preserving inference system for large Transformers deployable on commodity hardware.
STIP formally bounds privacy leakage while preserving lossless inference accuracy.
To further safeguard model parameters, STIP integrates trusted execution environments to resist model extraction and fine-tuning attacks.
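To make the core idea concrete, the following minimal NumPy sketch illustrates how permutation-based protection can keep a single linear layer's computation exact while hiding the plaintext feature order from the server. This is a simplified illustration of the general technique under our own assumptions, not STIP's actual semi-symmetric protocol; the names `pi_in`, `pi_out`, and the tensor shapes are hypothetical.

```python
import numpy as np

# Hypothetical sketch: permutation-based protection of one linear layer.
# The developer and data owner share secret permutations; the server only
# ever sees permuted weights and permuted activations.
rng = np.random.default_rng(0)
d_in, d_out, batch = 8, 4, 2

W = rng.standard_normal((d_in, d_out))   # developer's plaintext weights
x = rng.standard_normal((batch, d_in))   # data owner's plaintext input

pi_in = rng.permutation(d_in)            # secret input-feature permutation
pi_out = rng.permutation(d_out)          # secret output-feature permutation

# Developer ships protected weights to the untrusted server:
# rows shuffled by pi_in, columns by pi_out.
W_protected = W[pi_in][:, pi_out]

# Data owner permutes input features with the same pi_in before upload.
x_protected = x[:, pi_in]

# Server computes only on protected tensors.
y_protected = x_protected @ W_protected

# Data owner inverts pi_out locally to recover the exact result.
y = y_protected[:, np.argsort(pi_out)]

assert np.allclose(y, x @ W)             # inference remains lossless
```

Because a permutation is an orthogonal linear map, the permuted computation reproduces the original output exactly once the output permutation is undone, which is consistent with the lossless-accuracy property claimed above; STIP's full design additionally coordinates such transformations across the three parties and the nonlinear Transformer layers.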

We evaluate STIP on six representative Transformer model families, including models with up to 70 billion parameters, under three deployment settings.
STIP's efficiency is comparable to that of unprotected full-cloud inference; for example, it achieves 31.7 ms latency on the LLaMA2-7B model.
STIP also shows effective resistance to various attacks against user data and model parameters.
STIP has been deployed in a production environment with our proprietary 70B model.
In a three-month online test, STIP incurred only 12% additional latency, and no privacy incidents were reported, demonstrating its practicality and robustness for production-scale AI systems.
