Tianyue Chu, Devriş İşler (IMDEA Networks Institute & Universidad Carlos III de Madrid), Nikolaos Laoutaris (IMDEA Networks Institute)

Federated Learning (FL) has evolved into a pivotal paradigm for collaborative machine learning, enabling a centralised server to compute a global model by aggregating the local models trained by clients. However, the distributed nature of FL renders it susceptible to poisoning attacks that exploit its linear aggregation rule, FEDAVG. To address this vulnerability, FEDQV was recently introduced as a superior alternative to FEDAVG, specifically designed to mitigate poisoning attacks by penalising deviating clients more than linearly. Nevertheless, FEDQV remains exposed to privacy attacks that aim to infer private information from clients’ local models. To counteract such privacy threats, a well-known approach is to use a Secure Aggregation (SA) protocol to ensure that the server cannot inspect individual trained models as it aggregates them. In this work, we show how to implement SA on top of FEDQV in order to address both poisoning and privacy attacks. We mount several privacy attacks against FEDQV and demonstrate the effectiveness of SA in countering them.
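As context for the linear aggregation rule the abstract refers to, the following is a minimal sketch of FEDAVG's weighted averaging step, assuming each client submits a flat parameter vector together with its local dataset size (the function name and interface are illustrative, not the paper's implementation):

```python
import numpy as np

def fedavg(client_models, client_data_sizes):
    """Linear FEDAVG aggregation: the global model is the weighted
    average of client parameter vectors, with weights proportional
    to each client's local dataset size."""
    weights = np.asarray(client_data_sizes, dtype=float)
    weights = weights / weights.sum()           # normalise to sum to 1
    stacked = np.stack([np.asarray(m, dtype=float) for m in client_models])
    # Global model = sum_k (n_k / n) * theta_k
    return np.tensordot(weights, stacked, axes=1)

# Example: three clients with equal data sizes contribute equally.
global_model = fedavg([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]], [10, 10, 10])
# global_model -> array([3., 4.])
```

Because every client's update enters this sum with a fixed linear weight, a single poisoned update shifts the global model proportionally, which is the weakness FEDQV's more-than-linear taxing of deviating clients is designed to address.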
