Hossein Fereidooni (Technical University of Darmstadt), Alessandro Pegoraro (Technical University of Darmstadt), Phillip Rieger (Technical University of Darmstadt), Alexandra Dmitrienko (University of Wuerzburg), Ahmad-Reza Sadeghi (Technical University of Darmstadt)

Federated learning (FL) is a collaborative learning paradigm allowing multiple clients to jointly train a model without sharing their training data. However, FL is susceptible to poisoning attacks, in which the adversary injects manipulated model updates into the federated model aggregation process to corrupt or destroy predictions (untargeted poisoning) or implant hidden functionalities (targeted poisoning or backdoors). Existing defenses against poisoning attacks in FL have several limitations: they rely on specific assumptions about attack types, strategies, or data distributions, or they are not sufficiently robust against advanced injection techniques and strategies while simultaneously maintaining the utility of the aggregated model.

To address the deficiencies of existing defenses, we take a generic and completely different approach to detecting poisoning attacks (both targeted and untargeted). We present FreqFed, a novel aggregation mechanism that transforms the model updates (i.e., weights) into the frequency domain, where we can identify the core frequency components that carry sufficient information about the weights. This allows us to effectively filter out malicious model updates crafted during local training on the clients, regardless of the attack type, strategy, or the clients' data distributions. We extensively evaluate the efficiency and effectiveness of FreqFed in different application domains, including image classification, word prediction, IoT intrusion detection, and speech recognition. We demonstrate that FreqFed can mitigate poisoning attacks effectively with a negligible impact on the utility of the aggregated model.
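The mechanism the abstract describes (mapping each client's weight update into the frequency domain, keeping the dominant low-frequency components, and filtering out anomalous updates before aggregation) can be illustrated with a minimal sketch. The choice of a DCT, the fixed number of retained coefficients, the two-way agglomerative clustering, and the majority-cluster acceptance rule below are illustrative assumptions for exposition, not necessarily the paper's exact design.

```python
import numpy as np
from scipy.fft import dct
from sklearn.cluster import AgglomerativeClustering

def low_freq_fingerprint(update, k=1000):
    """Flatten one client's per-layer weight update and keep its k lowest DCT coefficients."""
    flat = np.concatenate([layer.ravel() for layer in update])
    return dct(flat, norm="ortho")[:k]

def filtered_aggregate(updates, k=1000):
    """Cluster frequency fingerprints and average only the majority (presumed benign) cluster.

    `updates` is a list of client updates, each a list of numpy arrays (one per layer).
    """
    fingerprints = np.stack([low_freq_fingerprint(u, k) for u in updates])
    # Illustrative stand-in for the filtering step: split fingerprints into two clusters
    # and treat the larger cluster as benign.
    labels = AgglomerativeClustering(n_clusters=2).fit_predict(fingerprints)
    majority = np.bincount(labels).argmax()
    accepted = [u for u, lbl in zip(updates, labels) if lbl == majority]
    # Federated averaging over the accepted updates, layer by layer.
    return [np.mean(np.stack(layers), axis=0) for layers in zip(*accepted)]
```

In this sketch the server never inspects raw weights directly for filtering; it compares compact frequency fingerprints, which is what makes the approach agnostic to the specific attack type and to how the clients' data are distributed.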
