Mu Yuan (The Chinese University of Hong Kong), Lan Zhang (University of Science and Technology of China), Yihang Cheng (University of Science and Technology of China), Miao-Hui Song (University of Science and Technology of China), Guoliang Xing (The Chinese University of Hong Kong), Xiang-Yang Li (University of Science and Technology of China)

The privacy of model parameters and user data is crucial for Transformer-based cloud services, such as online chatbots. While recent advances in secure multi-party computation and homomorphic encryption provide strong cryptographic guarantees, their computational overhead makes them infeasible for real-time inference with large-scale Transformer models.

In this work, we propose a practical alternative that balances privacy and efficiency in real-world deployments.
We introduce a three-party threat model involving a model developer, a cloud model server, and a data owner, capturing the trust assumptions and deployment conditions of practical AI services.
Within this framework, we design a semi-symmetric permutation-based protection mechanism and present STIP, the first three-party privacy-preserving inference system for large Transformers deployable on commodity hardware.
STIP formally bounds privacy leakage while preserving lossless inference accuracy.
To further safeguard model parameters, STIP integrates trusted execution environments to resist model extraction and fine-tuning attacks.
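The core intuition behind permutation-based protection can be illustrated on a single linear layer: because matrix multiplication is equivariant under a consistent permutation of the shared dimension, the server can compute on permuted data and permuted weights without any loss of accuracy. The following is a minimal sketch of this general idea only, not the authors' exact STIP construction; the variable names and the single-layer setting are illustrative assumptions.

```python
import numpy as np

# Sketch: for a linear layer y = x @ W, if the data owner permutes the
# feature columns of x with a secret permutation pi, and the model
# developer permutes the rows of W with the same pi, the untrusted
# server computes on permuted values yet obtains the identical product,
# so inference accuracy is lossless.
rng = np.random.default_rng(0)

d_in, d_out = 8, 4
x = rng.normal(size=(2, d_in))       # data owner's private input
W = rng.normal(size=(d_in, d_out))   # developer's private weights

pi = rng.permutation(d_in)           # secret permutation (never sent to server)

x_perm = x[:, pi]                    # what the data owner uploads
W_perm = W[pi, :]                    # what the developer deploys to the cloud

y_plain = x @ W                      # unprotected reference result
y_perm = x_perm @ W_perm             # computed entirely by the server

assert np.allclose(y_plain, y_perm)  # identical output, lossless accuracy
```

The server only ever sees `x_perm` and `W_perm`, neither of which reveals the original feature ordering without knowledge of `pi`; extending this consistency across the layers of a full Transformer is what the paper's construction addresses.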

We evaluate STIP on six representative Transformer model families, including models with up to 70 billion parameters, under three deployment settings.
STIP's efficiency is comparable to that of unprotected full-cloud inference; for example, STIP achieves 31.7 ms latency on the LLaMA2-7B model.
STIP also shows effective resistance to various attacks against user data and model parameters.
STIP has been deployed in a production environment on our proprietary 70B model.
In a three-month online test, STIP incurred only 12% additional latency, and no privacy incidents were reported, demonstrating its practicality and robustness for production-scale AI systems.
