Xiaoyu Cao (Duke University), Minghong Fang (The Ohio State University), Jia Liu (The Ohio State University), Neil Zhenqiang Gong (Duke University)

Byzantine-robust federated learning aims to enable a service provider to learn an accurate global model when a bounded number of clients are malicious. The key idea of existing Byzantine-robust federated learning methods is that the service provider performs statistical analysis on the clients' local model updates and removes suspicious ones before aggregating them to update the global model. However, malicious clients can still corrupt the global models in these methods by sending carefully crafted local model updates to the service provider. The fundamental reason is that there is no root of trust in existing federated learning methods, i.e., from the service provider's perspective, every client could be malicious.

In this work, we bridge this gap by proposing FLTrust, a new federated learning method in which the service provider itself bootstraps trust. Specifically, the service provider collects a small, clean training dataset (called the root dataset) for the learning task and maintains a model (called the server model) based on it to bootstrap trust. In each iteration, the service provider first assigns a trust score to each local model update from the clients, where a local model update receives a lower trust score if its direction deviates more from the direction of the server model update. Then, the service provider normalizes the magnitudes of the local model updates such that they lie on the same hyper-sphere as the server model update in the vector space. This normalization limits the impact of malicious local model updates with large magnitudes. Finally, the service provider computes the average of the normalized local model updates, weighted by their trust scores, as the global model update, which is used to update the global model. Our extensive evaluations on six datasets from different domains show that FLTrust is secure against both existing attacks and strong adaptive attacks. For instance, using a root dataset with fewer than 100 examples, FLTrust under adaptive attacks with 40%-60% malicious clients can still train global models that are as accurate as the global models trained by FedAvg under no attacks, where FedAvg is a popular method in non-adversarial settings.
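The aggregation rule described above can be sketched in a few lines of NumPy. This is a minimal illustration under stated assumptions, not the authors' implementation: it assumes the trust score is a clipped (non-negative) cosine similarity between each local update and the server model update, since the abstract only states that larger directional deviation yields a lower score; the function name fltrust_aggregate and the small constant added to the norms to avoid division by zero are our own.

import numpy as np

def fltrust_aggregate(local_updates, server_update):
    # Sketch of one FLTrust aggregation round (hypothetical helper).
    # local_updates: list of 1-D arrays, one local model update per client.
    # server_update: 1-D array, update computed on the root dataset.
    g0 = np.asarray(server_update, dtype=float)
    g0_norm = np.linalg.norm(g0)
    weighted_sum = np.zeros_like(g0)
    total_score = 0.0
    for g in local_updates:
        g = np.asarray(g, dtype=float)
        # Trust score: cosine similarity with the server update, clipped at
        # zero, so updates pointing away from the server direction get no weight.
        cos = np.dot(g, g0) / (np.linalg.norm(g) * g0_norm + 1e-12)
        score = max(float(cos), 0.0)
        # Normalize the local update to the same magnitude as the server
        # update, limiting the impact of malicious updates with large norms.
        g_normalized = g * (g0_norm / (np.linalg.norm(g) + 1e-12))
        weighted_sum += score * g_normalized
        total_score += score
    if total_score == 0.0:
        # No client was trusted in this round; leave the global model unchanged.
        return np.zeros_like(g0)
    # Trust-score-weighted average of the normalized local updates.
    return weighted_sum / total_score

In practice, the returned vector would be applied to the current global model with the server's learning rate, exactly as a standard FedAvg-style update would be.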
