Virat Shejwalkar (UMass Amherst), Amir Houmansadr (UMass Amherst)

Federated learning (FL) enables many data owners (e.g., mobile devices) to train a joint ML model (e.g., a next-word prediction classifier) without sharing their private training data.
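To make the setting concrete, the following is a minimal sketch of one round of federated averaging (FedAvg), the canonical FL protocol; the linear model, data, and all names below are illustrative assumptions, not details from this paper.

```python
# Minimal FedAvg sketch: clients share weight updates, never raw data.
import numpy as np

def local_update(global_w, X, y, lr=0.01, epochs=1):
    """A client fits a linear model on its private (X, y) and returns
    only the weight delta; the raw data never leaves the device."""
    w = global_w.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)   # least-squares gradient
        w -= lr * grad
    return w - global_w

def server_round(global_w, client_data):
    """The server collects one update per client and averages them
    into the next global model."""
    updates = [local_update(global_w, X, y) for X, y in client_data]
    return global_w + np.mean(updates, axis=0)

# Usage: three clients, each holding its own private data.
rng = np.random.default_rng(0)
clients = [(rng.normal(size=(20, 5)), rng.normal(size=20)) for _ in range(3)]
w = np.zeros(5)
for _ in range(10):
    w = server_round(w, clients)
```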

However, FL is known to be susceptible to poisoning attacks by malicious participants (e.g., adversary-owned mobile devices) who aim to degrade the accuracy of the jointly trained model by sending malicious model updates during the federated training process.

In this paper, we present a generic framework for model poisoning attacks on FL. We show that our framework leads to poisoning attacks that substantially outperform state-of-the-art model poisoning attacks. For instance, our attacks reduce the accuracy of FL models by $1.5\times$ to $60\times$ more than previously discovered poisoning attacks.
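The abstract does not spell out the attack construction, but the general shape of such an attack can be sketched as follows: perturb the benign aggregate along a malicious direction, scaled as aggressively as the aggregation rule tolerates. Everything below (the perturbation direction, the toy acceptance rule, the helper names) is an illustrative assumption, not the paper's exact formulation.

```python
import numpy as np

def craft_poisoned_update(benign_updates, accepts, gamma_init=10.0, tau=1e-3):
    """Start from the benign mean, perturb it along an assumed malicious
    direction (the negated sign of the mean), and halve the scale gamma
    until the aggregation rule still accepts the poisoned update."""
    mu = np.mean(benign_updates, axis=0)
    direction = -np.sign(mu)
    gamma = gamma_init
    while gamma > tau:
        candidate = mu + gamma * direction
        if accepts(benign_updates, candidate):
            return candidate
        gamma /= 2.0
    return mu + tau * direction

def toy_accepts(benign_updates, candidate):
    """Toy stand-in for a Byzantine-robust check: accept an update only
    if it is no farther from the benign mean than the farthest benign
    update."""
    mu = np.mean(benign_updates, axis=0)
    max_dist = max(np.linalg.norm(u - mu) for u in benign_updates)
    return np.linalg.norm(candidate - mu) <= max_dist

# Usage: craft a poisoned update against the toy acceptance rule.
rng = np.random.default_rng(1)
benign = rng.normal(size=(10, 5))
poisoned = craft_poisoned_update(benign, toy_accepts)
```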

Our work demonstrates that existing Byzantine-robust FL algorithms are significantly more susceptible to model poisoning than previously thought. Motivated by this, we design a defense against FL poisoning, called divide-and-conquer (DnC). We demonstrate that DnC outperforms all existing Byzantine-robust FL algorithms in defeating model poisoning attacks; specifically, it is $2.5\times$ to $12\times$ more resilient in our experiments with different datasets and models.
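The abstract does not detail DnC's mechanism. As one plausible illustration of a divide-and-conquer style robust aggregation, the sketch below filters outliers spectrally: poisoned updates tend to dominate the direction of maximum variance among client updates, so updates with the largest projections onto that direction are discarded before averaging. The function name, filtering amount, and omission of any coordinate subsampling are simplifying assumptions.

```python
import numpy as np

def dnc_style_aggregate(updates, num_malicious):
    """Spectral-filtering aggregation sketch: center the client updates,
    project them onto the top singular vector of the centered update
    matrix, and discard the updates with the largest squared projections
    (outlier scores) before averaging the rest."""
    U = np.stack(updates)                     # shape: (n_clients, dim)
    centered = U - U.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    v = vt[0]                                 # top right singular vector
    scores = (centered @ v) ** 2              # outlier scores
    keep = np.argsort(scores)[: len(updates) - num_malicious]
    return U[keep].mean(axis=0)

# Usage: aggregate 10 updates, 2 of which are (assumed) malicious.
rng = np.random.default_rng(2)
updates = list(rng.normal(size=(8, 5))) + [np.full(5, 10.0), np.full(5, -10.0)]
agg = dnc_style_aggregate(updates, num_malicious=2)
```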
