Minghong Fang (University of Louisville), Seyedsina Nabavirazavi (Florida International University), Zhuqing Liu (University of North Texas), Wei Sun (Wichita State University), Sundararaja Iyengar (Florida International University), Haibo Yang (Rochester Institute of Technology)

Federated learning (FL) allows multiple clients to collaboratively train a global machine learning model through a server, without exchanging their private training data. However, the decentralized nature of FL makes it susceptible to poisoning attacks, where malicious clients can manipulate the global model by sending altered local model updates. To counter these attacks, a variety of aggregation rules designed to be resilient to Byzantine failures have been introduced. Nonetheless, these methods can still be vulnerable to sophisticated attacks or depend on unrealistic assumptions about the server. In this paper, we demonstrate that there is no need to design new Byzantine-robust aggregation rules; instead, FL can be secured by enhancing the robustness of well-established aggregation rules. To this end, we present FoundationFL, a novel defense mechanism against poisoning attacks. FoundationFL involves the server generating synthetic updates after receiving local model updates from clients. It then applies existing Byzantine-robust foundational aggregation rules, such as Trimmed-mean or Median, to combine clients' model updates with the synthetic ones. We theoretically establish the convergence performance of FoundationFL under Byzantine settings. Comprehensive experiments across several real-world datasets validate the effectiveness of our FoundationFL method.
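To make the aggregation step concrete, below is a minimal NumPy sketch of one FoundationFL-style round: the server appends synthetic updates to the clients' updates and then applies a foundational rule (coordinate-wise Trimmed-mean or Median). The synthetic-update generator shown here (Gaussian perturbations around the coordinate-wise median) is an illustrative assumption, not the paper's actual construction, and the function names (`foundationfl_aggregate`, `trimmed_mean`) and parameters are likewise hypothetical.

```python
import numpy as np

def trimmed_mean(updates, trim_k):
    """Coordinate-wise trimmed mean: drop the trim_k smallest and
    trim_k largest values in each coordinate, then average the rest."""
    sorted_updates = np.sort(updates, axis=0)  # sort each coordinate independently
    kept = sorted_updates[trim_k: updates.shape[0] - trim_k]
    return kept.mean(axis=0)

def coordinate_median(updates):
    """Coordinate-wise median, the other foundational rule named in the abstract."""
    return np.median(updates, axis=0)

def foundationfl_aggregate(client_updates, num_synthetic, rng,
                           rule="trimmed_mean", trim_k=1):
    """Hypothetical FoundationFL-style aggregation round.

    client_updates : (n_clients, d) array of local model updates,
                     some of which may come from Byzantine clients.
    num_synthetic  : number of server-generated synthetic updates to append.

    NOTE: generating synthetic updates as small perturbations around the
    coordinate-wise median is a placeholder assumption for illustration.
    """
    anchor = np.median(client_updates, axis=0)
    noise = rng.normal(scale=0.01, size=(num_synthetic, client_updates.shape[1]))
    synthetic = anchor + noise
    combined = np.vstack([client_updates, synthetic])  # clients + synthetic updates
    if rule == "trimmed_mean":
        return trimmed_mean(combined, trim_k)
    return coordinate_median(combined)

# Toy usage: 8 honest clients near the true update, 2 poisoned ones.
rng = np.random.default_rng(0)
honest = rng.normal(loc=1.0, scale=0.1, size=(8, 4))
poisoned = np.full((2, 4), -50.0)  # crude model-poisoning updates
agg = foundationfl_aggregate(np.vstack([honest, poisoned]),
                             num_synthetic=4, rng=rng, trim_k=2)
print(agg)  # close to the honest mean (~1.0 per coordinate)
```

In the toy run, `trim_k` is set to the assumed number of Byzantine clients, which lets Trimmed-mean discard both poisoned updates; the synthetic updates pad the benign cluster so the rule has more well-behaved values to keep after trimming.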
