Virat Shejwalkar (UMass Amherst), Amir Houmansadr (UMass Amherst)

Federated learning (FL) enables many data owners (e.g., mobile devices) to train a joint ML model (e.g., a next-word prediction classifier) without sharing their private training data.
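To make the setting concrete, below is a minimal sketch of one FL round in the style of FedAvg: each client runs local SGD on its own data and only the resulting weights reach the server, which averages them. The linear model, squared loss, and all hyperparameters are illustrative assumptions, not the paper's experimental setup.

```python
import numpy as np

def grad_fn(w, x, y):
    # Gradient of squared loss for a linear model: L = 0.5 * (w.x - y)^2
    return (w @ x - y) * x

def local_update(w_global, local_data, lr=0.01, epochs=1):
    # One client's local SGD on its private data; only the resulting
    # weights leave the device, never the data itself.
    w = w_global.copy()
    for _ in range(epochs):
        for x, y in local_data:
            w -= lr * grad_fn(w, x, y)
    return w

def fedavg_round(w_global, client_datasets):
    # The server aggregates the locally trained weights (plain averaging).
    updates = [local_update(w_global, data) for data in client_datasets]
    return np.mean(updates, axis=0)

# Toy usage: 3 clients, each holding private (x, y) pairs for a 2-d model.
rng = np.random.default_rng(0)
clients = [[(rng.normal(size=2), rng.normal()) for _ in range(8)]
           for _ in range(3)]
w = np.zeros(2)
for _ in range(5):  # five federated rounds
    w = fedavg_round(w, clients)
```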

However, FL is known to be susceptible to poisoning attacks by malicious participants (e.g., adversary-owned mobile devices) who aim to degrade the accuracy of the jointly trained model by sending malicious model updates during the federated training process.

In this paper, we present a generic framework for model poisoning attacks on FL. We show that our framework leads to poisoning attacks that substantially outperform state-of-the-art model poisoning attacks. For instance, our attacks result in $1.5\times$ to $60\times$ higher reductions in the accuracy of FL models compared to previously discovered poisoning attacks.
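The abstract does not detail the attack construction, but the general shape of such model poisoning attacks can be sketched as follows: the adversary starts from an estimate of the benign aggregate and pushes it along a perturbation direction scaled by a factor $\gamma$. The inverse-standard-deviation direction and the fixed $\gamma$ below are illustrative assumptions; in practice $\gamma$ would be tuned against the specific robust aggregator, which this sketch does not model.

```python
import numpy as np

def craft_malicious_update(benign_updates, gamma=10.0):
    # Schematic model-poisoning update: take the benign average and
    # perturb it along a chosen direction. The (sign-flipped) per-dimension
    # standard deviation is one hypothetical direction choice; gamma
    # controls how aggressively the update deviates from benign behavior.
    mu = np.mean(benign_updates, axis=0)
    sigma = np.std(benign_updates, axis=0)
    perturbation = -sigma  # illustrative direction, not the paper's exact choice
    return mu + gamma * perturbation
```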

Our work demonstrates that existing Byzantine-robust FL algorithms are significantly more susceptible to model poisoning than previously thought. Motivated by this, we design a defense against FL poisoning, called \emph{divide-and-conquer} (DnC). We demonstrate that DnC outperforms all existing Byzantine-robust FL algorithms in defeating model poisoning attacks; specifically, it is $2.5\times$ to $12\times$ more resilient in our experiments with different datasets and models.
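The abstract does not spell out DnC's mechanics. The sketch below shows one plausible spectral-filtering shape for such a defense (center the client updates, project them onto the direction of maximum variance, and drop the highest-scoring fraction before averaging); it should be read as an assumption-laden illustration, not the paper's exact algorithm, which additionally involves details such as dimension subsampling.

```python
import numpy as np

def spectral_filter(updates, remove_frac=0.2):
    # Hypothetical spectral outlier filter: malicious updates tend to
    # concentrate along the top principal direction of the update matrix,
    # so updates with the largest squared projections on that direction
    # are discarded and the remainder averaged.
    U = np.asarray(updates)
    centered = U - U.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    top_dir = vt[0]                      # top right singular vector
    scores = (centered @ top_dir) ** 2   # outlier score per client update
    n_remove = int(remove_frac * len(U))
    keep = np.argsort(scores)[: len(U) - n_remove]
    return U[keep].mean(axis=0)
```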
