Xuanqi Liu (Tsinghua University), Zhuotao Liu (Tsinghua University), Qi Li (Tsinghua University), Ke Xu (Tsinghua University), Mingwei Xu (Tsinghua University)

The escalating focus on data privacy poses significant challenges for collaborative neural network training, where data ownership and model training/deployment responsibilities reside with distinct entities. Our community has made substantial contributions to addressing this challenge, proposing various approaches such as federated learning (FL) and privacy-preserving machine learning based on cryptographic constructs like homomorphic encryption (HE) and secure multiparty computation (MPC). However, FL completely overlooks model privacy, and HE has limited extensibility (it is confined to a single data provider). While state-of-the-art MPC frameworks provide reasonable throughput and simultaneously ensure model/data privacy, they rely on a critical non-colluding assumption on the computing servers, and relaxing this assumption remains an open problem.

In this paper, we present Pencil, the first private training framework for collaborative learning that simultaneously offers data privacy, model privacy, and extensibility to multiple data providers, without relying on the non-colluding assumption. Our fundamental design principle is to construct the n-party collaborative training protocol on top of an efficient two-party protocol, while ensuring that switching to different data providers during model training introduces no extra cost. We introduce several novel cryptographic protocols to realize this design principle and conduct a rigorous security and privacy analysis. Our comprehensive evaluations of Pencil demonstrate that (i) models trained in plaintext and models trained privately using Pencil exhibit nearly identical test accuracies; (ii) the training overhead of Pencil is greatly reduced: Pencil achieves 10∼260× higher throughput and two orders of magnitude less communication than prior art; (iii) Pencil is resilient against both existing and adaptive (white-box) attacks.
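To make the design principle concrete, the sketch below illustrates (in plain Python, with hypothetical names such as `DataProvider` and `TwoPartyTrainer` that do not come from the paper) how an n-party collaborative training loop can be reduced to repeated invocations of a two-party model-owner/data-provider primitive, with providers swapped freely between batches. The cryptographic protection of the two-party step is elided; this is a conceptual outline of the reduction, not the authors' implementation.

```python
# Conceptual sketch (hypothetical, not the authors' code): building n-party
# collaborative training on a 2-party (model owner <-> data provider) private
# training protocol, so switching data providers between batches adds no
# extra cost. All class/function names are illustrative placeholders.

from typing import Iterable, List, Sequence


class DataProvider:
    """Holds a private local dataset; raw samples never leave the provider."""

    def __init__(self, samples: Sequence[float]):
        self._samples = list(samples)

    def batches(self, batch_size: int) -> Iterable[List[float]]:
        for i in range(0, len(self._samples), batch_size):
            yield self._samples[i:i + batch_size]


class TwoPartyTrainer:
    """Placeholder for the 2-party private training step: in Pencil this step
    runs under cryptographic protection, so the model owner never sees the
    batch and the data provider never sees the weights."""

    def __init__(self, weights: List[float]):
        self.weights = weights

    def private_step(self, batch: List[float], lr: float = 0.01) -> None:
        # Stand-in plaintext update; the real protocol exchanges only
        # protected (encrypted / secret-shared) messages.
        grad = sum(batch) / max(len(batch), 1)
        self.weights = [w - lr * grad for w in self.weights]


def collaborative_training(weights: List[float],
                           providers: List[DataProvider],
                           epochs: int = 1,
                           batch_size: int = 32) -> List[float]:
    """n-party training via the 2-party primitive: the model owner simply
    alternates among data providers, with no re-setup when switching."""
    trainer = TwoPartyTrainer(weights)
    for _ in range(epochs):
        for provider in providers:  # switching providers is free by design
            for batch in provider.batches(batch_size):
                trainer.private_step(batch)
    return trainer.weights


if __name__ == "__main__":
    providers = [DataProvider(range(100)), DataProvider(range(100, 200))]
    final_weights = collaborative_training([0.0] * 4, providers, epochs=2)
    print(final_weights)
```

The point of the sketch is structural: because the outer loop only ever invokes the two-party primitive, adding or changing data providers does not alter the cryptographic setup, which is the extensibility property the abstract describes.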
