Mohammad Naseri (University College London), Jamie Hayes (DeepMind), Emiliano De Cristofaro (University College London & Alan Turing Institute)

Federated Learning (FL) allows multiple participants to train machine learning models collaboratively by keeping their datasets local while only exchanging model updates. Alas, this is not necessarily free from privacy and robustness vulnerabilities, e.g., via membership, property, and backdoor attacks. This paper investigates whether and to what extent one can use Differential Privacy (DP) to protect both privacy and robustness in FL. To this end, we present a first-of-its-kind evaluation of Local and Central Differential Privacy (LDP/CDP) techniques in FL, assessing their feasibility and effectiveness.
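To make the LDP/CDP distinction concrete, the following minimal NumPy sketch (not the authors' implementation; `clip`, `clip_norm`, `sigma`, and the toy round are illustrative assumptions) shows where clipping and Gaussian noise are typically applied in a federated-averaging round under each variant: at each client before its update leaves the device (LDP), or at a trusted server after aggregation (CDP).

```python
# Illustrative sketch of LDP vs. CDP noise placement in federated averaging.
# All names and parameters are hypothetical, not taken from the paper's code.
import numpy as np

rng = np.random.default_rng(0)

def clip(update, clip_norm):
    """Scale an update so its L2 norm is at most clip_norm."""
    norm = np.linalg.norm(update)
    return update * min(1.0, clip_norm / (norm + 1e-12))

def ldp_client_update(update, clip_norm, sigma):
    """Local DP: each client clips and noises its own update before sending it."""
    return clip(update, clip_norm) + rng.normal(0.0, sigma * clip_norm, update.shape)

def cdp_aggregate(updates, clip_norm, sigma):
    """Central DP: a trusted server clips each update, averages, then adds noise once."""
    clipped = [clip(u, clip_norm) for u in updates]
    noise = rng.normal(0.0, sigma * clip_norm / len(updates), clipped[0].shape)
    return np.mean(clipped, axis=0) + noise

# Toy round: 5 clients, a 10-parameter model.
updates = [rng.normal(size=10) for _ in range(5)]
ldp_avg = np.mean([ldp_client_update(u, clip_norm=1.0, sigma=1.0) for u in updates], axis=0)
cdp_avg = cdp_aggregate(updates, clip_norm=1.0, sigma=1.0)
```

In this sketch, LDP protects each client's update even from the server, at the cost of adding noise per client, while CDP adds a single noise draw to the aggregate, which generally preserves more utility for the same privacy budget.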

Our experiments show that both DP variants defend against backdoor attacks, albeit with different protection-utility trade-offs, and more effectively than other robustness defenses. DP also mitigates white-box membership inference attacks in FL, and our work is the first to show this empirically. Neither LDP nor CDP, however, defends against property inference. Overall, our work provides a comprehensive, reusable measurement methodology to quantify the trade-offs between robustness/privacy and utility in differentially private FL.
