Tianyue Chu, Devriş İşler (IMDEA Networks Institute & Universidad Carlos III de Madrid), Nikolaos Laoutaris (IMDEA Networks Institute)

Federated Learning (FL) has evolved into a pivotal paradigm for collaborative machine learning, enabling a centralised server to compute a global model by aggregating the local models trained by clients. However, the distributed nature of FL renders it susceptible to poisoning attacks that exploit its linear aggregation rule, FEDAVG. To address this vulnerability, FEDQV has recently been introduced as a superior alternative to FEDAVG, specifically designed to mitigate poisoning attacks by taxing deviating clients more than linearly. Nevertheless, FEDQV remains exposed to privacy attacks that aim to infer private information from clients' local models. To counteract such privacy threats, a well-known approach is to use a Secure Aggregation (SA) protocol, which ensures that the server cannot inspect individual trained models as it aggregates them. In this work, we show how to implement SA on top of FEDQV in order to address both poisoning and privacy attacks. We mount several privacy attacks against FEDQV and demonstrate the effectiveness of SA in countering them.
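To make the two ingredients of the abstract concrete, the following minimal sketch (an illustration, not the paper's implementation) shows FEDAVG's linear weighted-average rule and the classic pairwise-masking idea behind Secure Aggregation: each pair of clients agrees on a random mask that one adds and the other subtracts, so the masks cancel in the server's sum and only the aggregate is revealed.

```python
import numpy as np

def fedavg(client_updates, client_sizes):
    """FedAvg: average client updates weighted by local dataset size."""
    total = sum(client_sizes)
    return sum(u * (n / total) for u, n in zip(client_updates, client_sizes))

rng = np.random.default_rng(0)
n_clients, dim = 3, 4
updates = [rng.normal(size=dim) for _ in range(n_clients)]

# Pairwise random masks: clients i < j share mask r_ij; i adds it, j subtracts it.
masks = {(i, j): rng.normal(size=dim)
         for i in range(n_clients) for j in range(i + 1, n_clients)}

masked = []
for i in range(n_clients):
    m = updates[i].copy()
    for j in range(n_clients):
        if i < j:
            m += masks[(i, j)]
        elif j < i:
            m -= masks[(j, i)]
    masked.append(m)

# The server sums only masked updates; all pairwise masks cancel,
# so it learns the aggregate without seeing any individual model.
aggregate = sum(masked) / n_clients
plain_avg = fedavg(updates, [1] * n_clients)
assert np.allclose(aggregate, plain_avg)
```

Real SA protocols add key agreement, secret sharing for dropout recovery, and integer arithmetic over a finite field; the sketch above only shows why the server's view is limited to the sum.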
