Xinqian Wang (RMIT University), Xiaoning Liu (RMIT University), Shangqi Lai (CSIRO Data61), Xun Yi (RMIT University), Xingliang Yuan (University of Melbourne)

Secure inference enables encrypted machine learning model prediction over encrypted data, easing privacy concerns when models are deployed in Machine Learning as a Service (MLaaS). For efficiency, most recent secure inference protocols are constructed using secure multi-party computation (MPC) techniques, which ensure that MLaaS computes inference without learning the inputs of users and model owners. However, MPC-based protocols do not hide the information revealed by their output. In the context of secure inference, prediction outputs (i.e., inference results of encrypted user inputs) are revealed to users. As a result, adversaries can compromise the output privacy of secure inference, i.e., launch Membership Inference Attacks (MIAs) by querying encrypted models, just as they do against plaintext inference.
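To make the threat concrete, below is a minimal plaintext sketch of a confidence-thresholding MIA of the kind alluded to above; the function name, threshold, and decision rule are illustrative assumptions, not details from the paper.

```python
import numpy as np

def confidence_mia(prediction: np.ndarray, threshold: float = 0.9) -> bool:
    """Guess "member" when the top-class confidence exceeds a threshold.

    Models are typically more confident on training (member) samples; this
    gap is the signal MIAs exploit. In practice the threshold would be
    calibrated, e.g., with shadow models. (Illustrative sketch only.)
    """
    return float(prediction.max()) > threshold

# A query through (secure) inference returns a probability vector; a very
# confident output is flagged as a likely training-set member.
pred = np.array([0.02, 0.95, 0.03])
print(confidence_mia(pred))  # True -> adversary guesses "member"
```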

We observe that MPC-based secure inference often yields predictions perturbed relative to its plaintext counterpart on identical user inputs, due to approximations of nonlinear functions such as softmax. We thus evaluate whether MIAs can still exploit such perturbed predictions against known secure inference protocols. Our results show that secure inference remains vulnerable to MIAs: the adversary can steal membership information with success rates comparable to plaintext MIAs.
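The perturbation arises because MPC protocols replace nonlinear functions with cheap approximations. The sketch below mimics this by swapping exp for a truncated Taylor polynomial inside softmax; real protocols use more elaborate piecewise or iterative approximations, so the exact behavior here is only illustrative.

```python
import math
import numpy as np

def softmax(z: np.ndarray) -> np.ndarray:
    e = np.exp(z - z.max())
    return e / e.sum()

def approx_softmax(z: np.ndarray, terms: int = 5) -> np.ndarray:
    """Softmax with exp replaced by a truncated Taylor polynomial, mimicking
    the approximation of nonlinear functions in MPC-based inference
    (illustrative only; actual protocols differ)."""
    z = z - z.max()
    e = sum(z**k / math.factorial(k) for k in range(terms))
    e = np.clip(e, 1e-6, None)  # keep pseudo-probabilities positive
    return e / e.sum()

logits = np.array([2.0, 1.0, 0.1])
print(softmax(logits))         # exact probabilities
print(approx_softmax(logits))  # same argmax, slightly perturbed values
```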

To tackle this open challenge, we propose SIGuard, a framework that guards the output privacy of secure inference against exploitation by MIAs. SIGuard's protocol can be seamlessly integrated into existing MPC-based secure inference protocols without intruding on their computation. It takes the encrypted predictions output by secure inference and crafts noise to perturb them without compromising inference accuracy; only the perturbed predictions are revealed to users at the end of protocol execution. SIGuard achieves stringent privacy guarantees via a co-design of MPC techniques and machine learning. We further conduct comprehensive evaluations to find the optimal hyper-parameters that balance efficiency and defense effectiveness against MIAs. Together, our evaluation shows that SIGuard effectively defends against MIAs, reducing attack accuracy to around random guessing with moderate overhead (1.1 s), about 24.8% of the secure inference cost (3.29 s) on the widely used ResNet34 over CIFAR-10.
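A plaintext sketch of the accuracy-preserving constraint follows: noise blurs the confidence scores that MIAs exploit while the predicted label is kept intact. SIGuard itself crafts such noise inside MPC over encrypted predictions; the Gaussian noise model and the argmax-repair step here are simplified assumptions, not the paper's protocol.

```python
import numpy as np

rng = np.random.default_rng(0)

def perturb_prediction(p: np.ndarray, scale: float = 0.05) -> np.ndarray:
    """Add noise to a probability vector without changing the top-1 label.

    Simplified plaintext stand-in for SIGuard's goal: blur the confidence
    scores MIAs exploit while preserving inference accuracy. The actual
    protocol operates over encrypted predictions under MPC.
    """
    top = int(p.argmax())
    noisy = np.clip(p + rng.normal(0.0, scale, size=p.shape), 1e-6, None)
    noisy /= noisy.sum()                 # project back onto the simplex
    if noisy.argmax() != top:            # if the noise flipped the label,
        noisy[top] = noisy.max() + 1e-3  # restore the original argmax
        noisy /= noisy.sum()
    return noisy

p = np.array([0.95, 0.03, 0.02])
print(perturb_prediction(p))  # same predicted class, perturbed confidences
```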
