Minghong Fang (University of Louisville), Seyedsina Nabavirazavi (Florida International University), Zhuqing Liu (University of North Texas), Wei Sun (Wichita State University), Sundararaja Iyengar (Florida International University), Haibo Yang (Rochester Institute of Technology)

Federated learning (FL) allows multiple clients to collaboratively train a global machine learning model through a server without exchanging their private training data. However, the decentralized nature of FL makes it susceptible to poisoning attacks, in which malicious clients manipulate the global model by sending crafted local model updates. To counter these attacks, a variety of Byzantine-robust aggregation rules have been proposed. Nonetheless, these methods remain vulnerable to sophisticated attacks or depend on unrealistic assumptions about the server. In this paper, we demonstrate that there is no need to design new Byzantine-robust aggregation rules; instead, FL can be secured by enhancing the robustness of well-established ones. To this end, we present FoundationFL, a novel defense mechanism against poisoning attacks. In FoundationFL, the server generates synthetic updates after receiving the clients' local model updates, then applies an existing Byzantine-robust foundational aggregation rule, such as Trimmed-mean or Median, to combine the clients' updates with the synthetic ones. We theoretically establish the convergence of FoundationFL under Byzantine settings, and comprehensive experiments across several real-world datasets validate the efficiency of our FoundationFL method.
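The aggregation step described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the abstract does not specify how the synthetic updates are generated, so here they are hypothetically drawn near the coordinate-wise median of the client updates purely as a placeholder; the `trim_k` parameter and the noise scale are likewise assumptions.

```python
import numpy as np

def trimmed_mean(updates, trim_k):
    """Coordinate-wise Trimmed-mean: for each coordinate, drop the
    trim_k largest and trim_k smallest values, then average the rest."""
    s = np.sort(updates, axis=0)
    return s[trim_k : updates.shape[0] - trim_k].mean(axis=0)

def foundationfl_aggregate(client_updates, num_synthetic, trim_k, rng):
    """Sketch of FoundationFL-style aggregation: append server-generated
    synthetic updates to the clients' updates, then apply an existing
    Byzantine-robust rule (Trimmed-mean here; Median works analogously)."""
    updates = np.stack(client_updates)
    # Placeholder synthetic-update generator (assumption, not the paper's
    # method): sample near the coordinate-wise median of client updates.
    anchor = np.median(updates, axis=0)
    synthetic = anchor + 0.01 * rng.standard_normal(
        (num_synthetic,) + anchor.shape
    )
    combined = np.concatenate([updates, synthetic], axis=0)
    return trimmed_mean(combined, trim_k)
```

For example, with five honest clients sending updates near 1.0 and one malicious client sending 100.0, the trimmed aggregate stays close to 1.0 because the outlier falls in the trimmed tail.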
