Virat Shejwalkar (UMass Amherst), Amir Houmansadr (UMass Amherst)

Federated learning (FL) enables many data owners (e.g., mobile devices) to train a joint ML model (e.g., a next-word prediction classifier) without sharing their private training data.
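For readers unfamiliar with this training setup, the sketch below illustrates one round of federated averaging on a toy linear model: each client computes an update on its own data, and the server only ever sees those updates, never the raw training examples. The helper names and the least-squares task are illustrative assumptions, not the paper's experimental setup.

```python
# Minimal sketch of one round of federated averaging (FedAvg-style).
# All names (local_update, fedavg_round, etc.) are illustrative, not from the paper.
import numpy as np

def local_update(global_weights, X, y, lr=0.1, epochs=1):
    """Run a few local SGD steps on a least-squares objective; only the update leaves the device."""
    w = global_weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)     # gradient of 0.5 * ||Xw - y||^2 / n
        w -= lr * grad
    return w - global_weights                 # the client shares an update, not its data

def fedavg_round(global_weights, clients):
    """Server averages the client updates; it never observes raw (X, y) pairs."""
    updates = [local_update(global_weights, X, y) for X, y in clients]
    return global_weights + np.mean(updates, axis=0)

# Toy usage: three clients, each holding private data for the same linear task.
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    y = X @ true_w + 0.01 * rng.normal(size=50)
    clients.append((X, y))

w = np.zeros(2)
for _ in range(100):
    w = fedavg_round(w, clients)
print(w)  # approaches [2, -1] without any client revealing its data
```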

However, FL is known to be susceptible to poisoning attacks by malicious participants (e.g., adversary-owned mobile devices) who aim to degrade the accuracy of the jointly trained model by sending malicious inputs during the federated training process.
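To make the threat model concrete, the following sketch shows one simple untargeted poisoning strategy (sign flipping with scaling) that a malicious client could submit in place of its honest update. This is a generic textbook-style illustration, not the optimized attack framework developed in the paper; the helper names are hypothetical.

```python
# Generic sketch of an untargeted model poisoning update (sign flip plus scaling).
# NOT the paper's attack framework; all helper names here are hypothetical.
import numpy as np

def honest_update(global_weights, X, y, lr=0.1):
    """One local SGD step on a least-squares objective, as a benign client would send."""
    grad = X.T @ (X @ global_weights - y) / len(y)
    return -lr * grad

def poisoned_update(global_weights, X, y, scale=10.0):
    """Flip the direction of the honest update and amplify it, so that a plain
    average over client updates is dragged away from the benign optimum."""
    return -scale * honest_update(global_weights, X, y)

# With naive averaging, a single such client can dominate the aggregate:
rng = np.random.default_rng(1)
X = rng.normal(size=(50, 2)); y = X @ np.array([2.0, -1.0])
benign = [honest_update(np.zeros(2), X, y) for _ in range(9)]
malicious = [poisoned_update(np.zeros(2), X, y)]
print(np.mean(benign + malicious, axis=0))  # points opposite the benign direction
```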

In this paper, we present a generic framework for model poisoning attacks on FL. We show that our framework leads to poisoning attacks that substantially outperform state-of-the-art model poisoning attacks. For instance, our attacks result in $1.5\times$ to $60\times$ higher reductions in the accuracy of FL models compared to previously discovered poisoning attacks.

Our work demonstrates that existing Byzantine-robust FL algorithms are significantly more susceptible to model poisoning than previously thought. Motivated by this, we design a defense against FL poisoning, called \emph{divide-and-conquer} (DnC). We demonstrate that DnC outperforms all existing Byzantine-robust FL algorithms in defeating model poisoning attacks; specifically, it is $2.5\times$ to $12\times$ more resilient in our experiments with different datasets and models.
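As a point of reference for what an existing Byzantine-robust aggregation rule looks like, the sketch below shows coordinate-wise median aggregation, one well-known defense of this kind. It is only an illustration of the defense class being compared against, not the DnC algorithm proposed here.

```python
# Sketch of a standard Byzantine-robust aggregation rule (coordinate-wise median),
# included only to illustrate the class of existing defenses; this is NOT DnC.
import numpy as np

def median_aggregate(updates):
    """Take the median of client updates in each coordinate, which limits the
    influence of a minority of arbitrarily malicious updates on the aggregate."""
    return np.median(np.stack(updates), axis=0)

# Toy comparison: 9 benign updates and 1 heavily scaled malicious update.
rng = np.random.default_rng(2)
benign = [np.array([1.0, -0.5]) + 0.01 * rng.normal(size=2) for _ in range(9)]
malicious = [np.array([-50.0, 25.0])]
print(np.mean(benign + malicious, axis=0))   # plain mean: corrupted by the outlier
print(median_aggregate(benign + malicious))  # median: stays close to the benign updates
```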
