Sinem Sav (EPFL), Apostolos Pyrgelis (EPFL), Juan Ramón Troncoso-Pastoriza (EPFL), David Froelicher (EPFL), Jean-Philippe Bossuat (EPFL), Joao Sa Sousa (EPFL), Jean-Pierre Hubaux (EPFL)

In this paper, we address the problem of privacy-preserving training and evaluation of neural networks in an $N$-party, federated learning setting. We propose a novel system, POSEIDON, the first of its kind in the regime of privacy-preserving neural network training. It employs multiparty lattice-based cryptography to preserve the confidentiality of the training data, the model, and the evaluation data, under a passive-adversary model and collusions between up to $N-1$ parties. To efficiently execute the secure backpropagation algorithm for training neural networks, we provide a generic packing approach that enables Single Instruction, Multiple Data (SIMD) operations on encrypted data. We also introduce arbitrary linear transformations within the cryptographic bootstrapping operation, optimizing the costly cryptographic computations over the parties, and we define a constrained optimization problem for choosing the cryptographic parameters. Our experimental results show that POSEIDON achieves accuracy similar to centralized or decentralized non-private approaches and that its computation and communication overhead scales linearly with the number of parties. POSEIDON trains a 3-layer neural network on the MNIST dataset with 784 features and 60K samples distributed among 10 parties in less than 2 hours.
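To make the packing idea concrete, below is a minimal sketch of CKKS-style SIMD packing, where many plaintext values are encoded into the slots of one ciphertext so that a single homomorphic operation acts element-wise on the whole vector. It uses the TenSEAL library purely for illustration; POSEIDON itself builds on multiparty lattice-based cryptography rather than TenSEAL, and the parameter values here are illustrative assumptions, not the output of the paper's constrained optimization problem.

# Sketch of SIMD packing under CKKS (TenSEAL; pip install tenseal).
# Parameters are illustrative, not POSEIDON's optimized choices.
import tenseal as ts

# CKKS context: poly_modulus_degree determines the number of SIMD
# slots available for packing (here 8192 / 2 = 4096 slots).
ctx = ts.context(
    ts.SCHEME_TYPE.CKKS,
    poly_modulus_degree=8192,
    coeff_mod_bit_sizes=[60, 40, 40, 60],
)
ctx.global_scale = 2 ** 40
ctx.generate_galois_keys()  # enables slot rotations (e.g., inner sums)

# Pack each vector into the slots of a single ciphertext.
x = ts.ckks_vector(ctx, [0.5, 1.0, 1.5, 2.0])  # e.g., activations
w = ts.ckks_vector(ctx, [2.0, 2.0, 2.0, 2.0])  # e.g., weights

# One homomorphic multiplication operates on all packed slots in
# parallel; this amortization is what makes batched steps of the
# encrypted backpropagation affordable.
prod = x * w
print(prod.decrypt())  # approximately [1.0, 2.0, 3.0, 4.0]

The design point this illustrates is that the per-operation cost of homomorphic arithmetic is amortized across all packed slots, which is why a generic packing strategy matters for the matrix and vector operations that dominate neural-network training.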
