Harsh Chaudhari (Indian Institute of Science, Bangalore), Rahul Rachuri (Aarhus University, Denmark), Ajith Suresh (Indian Institute of Science, Bangalore)

Machine learning has started to be deployed in fields such as healthcare and finance, which involve dealing with large amounts of sensitive data. This has propelled the need for and growth of privacy-preserving machine learning. We propose an efficient four-party protocol (4PC) that outperforms the state-of-the-art of Gordon et al. (ASIACRYPT 2018) and showcase its applications on three of the most widely-known machine learning algorithms -- Linear Regression, Logistic Regression, and Neural Networks.

We propose an efficient mixed-world framework (Trident) in the offline-online paradigm to switch between the Arithmetic, Boolean, and Garbled worlds. Our framework operates in the 4PC honest-majority setting over rings and is instantiated in a server-aided setting for machine learning, where the data is secret-shared among the servers. In addition, we propose conversions that are especially relevant to privacy-preserving machine learning. We outperform the current state-of-the-art, ABY3 (for three parties), in terms of both rounds and communication complexity.
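As an illustration of the server-aided setting described above, the sketch below shows plain additive secret sharing over the 64-bit ring $\mathbb{Z}_{2^{64}}$: a client splits its input into random shares that sum to the value, so that no proper subset of shares reveals anything. This is a generic textbook sketch for intuition only; Trident's actual sharing semantics (with its offline-online split and masked values) differ.

```python
# Minimal sketch (NOT Trident's actual sharing): additive n-out-of-n
# secret sharing over the ring Z_{2^64}, as used conceptually when
# clients secret-share their data among the servers.
import secrets

MASK = (1 << 64) - 1  # reduce modulo 2^64


def share(x, n=4):
    """Split x into n random additive shares that sum to x mod 2^64."""
    shares = [secrets.randbelow(1 << 64) for _ in range(n - 1)]
    shares.append((x - sum(shares)) & MASK)
    return shares


def reconstruct(shares):
    """Recombine by summing all shares modulo 2^64."""
    return sum(shares) & MASK


shares = share(42)
assert reconstruct(shares) == 42
# Any n-1 of the n shares are uniformly random and reveal nothing about x.
```

Linear operations (addition, scalar multiplication) can be performed locally on such shares, which is why secure multiplication and the world conversions dominate the protocol cost.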

The highlights of our framework include using a minimal number of expensive circuits overall as compared to ABY3. This can be seen in our technique for truncation, which does not affect the online cost of multiplication and removes the need for any circuits in the offline phase. Our B2A conversion has an improvement of $\mathbf{7}\times$ in rounds and $\mathbf{18}\times$ in communication complexity. In addition, all of the special conversions for machine learning, e.g., Secure Comparison, achieve constant round complexity. These massive improvements are primarily due to the advantage of having an additional, third honest party available in our setting.
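To make the two primitives above concrete, the sketch below shows them in the clear (with no secret sharing): (1) truncation after fixed-point multiplication over $\mathbb{Z}_{2^{64}}$, which discards the extra fractional bits the product accumulates, and (2) the standard identity $b = b_0 + b_1 - 2 b_0 b_1$ for converting a Boolean (XOR) sharing of a bit into an arithmetic value. The precision constant and helper names are illustrative; Trident's secure shared-value versions of these operations are what the paper actually contributes.

```python
# Clear-text sketch of two primitives (not the secure protocols):
# fixed-point truncation and the single-bit B2A identity.

MASK = (1 << 64) - 1
FRAC = 13  # assumed fixed-point precision: 13 fractional bits


def to_fixed(x):
    """Encode a non-negative real as a fixed-point ring element."""
    return int(round(x * (1 << FRAC))) & MASK


def fixed_mul_trunc(a, b):
    """Multiply two fixed-point values, then truncate the extra FRAC
    fractional bits so the product stays in the same encoding."""
    return ((a * b) >> FRAC) & MASK


def b2a(b0, b1):
    """Arithmetic ring value of the XOR-shared bit b = b0 ^ b1,
    via b0 + b1 - 2*b0*b1."""
    return (b0 + b1 - 2 * b0 * b1) & MASK


assert fixed_mul_trunc(to_fixed(1.5), to_fixed(2.0)) == to_fixed(3.0)
assert all(b2a(b0, b1) == (b0 ^ b1) for b0 in (0, 1) for b1 in (0, 1))
```

Doing the truncation without disturbing the online multiplication cost, and the B2A conversion in few rounds and little communication, is exactly where the framework's claimed savings over ABY3 come from.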

The practicality of our framework is argued through improvements in the benchmarking of the aforementioned algorithms when compared with ABY3. All the protocols are implemented over a 64-bit ring in both LAN and WAN settings. Our improvements go up to $\mathbf{187}\times$ for the training phase and $\mathbf{158}\times$ for the prediction phase, considering LAN and WAN together.
