Torsten Krauß (University of Würzburg), Jan König (University of Würzburg), Alexandra Dmitrienko (University of Würzburg), Christian Kanzow (University of Würzburg)

Federated Learning (FL) enables the training of machine learning models on distributed data. This approach offers benefits such as improved data privacy, reduced communication costs, and enhanced model performance through increased data diversity. However, FL systems are vulnerable to poisoning attacks, in which adversaries submit malicious updates to compromise the integrity of the aggregated model. Existing defense strategies against such attacks include filtering, influence reduction, and robust aggregation techniques. Filtering approaches have the advantage of not reducing classification accuracy, but they must contend with adversaries that adapt to the defense mechanism. The lack of a universally accepted definition of "adaptive adversaries" in the literature complicates the assessment of detection capabilities and prevents meaningful comparisons of FL defenses.

In this paper, we address the limitations of the commonly used definition of "adaptive attackers" proposed by Bagdasaryan et al. We propose AutoAdapt, a novel adaptation method that leverages an Augmented Lagrangian optimization technique. AutoAdapt replaces the manual search for suitable hyper-parameters with a principled alternative, and it generates more effective solutions by accommodating multiple inequality constraints, allowing adaptation to valid value ranges within the defensive metrics. Our proposed method significantly enhances adversaries' capabilities and accelerates research on developing attacks and defenses. By accommodating multiple valid-range constraints and adapting to diverse defense metrics, AutoAdapt challenges defenses that rely on multiple metrics and expands the range of potential adversarial behaviors. Through comprehensive studies, we demonstrate the effectiveness of AutoAdapt in adapting to multiple constraints simultaneously and showcase its power by speeding up evaluations by a factor of 15. Furthermore, we establish the versatility of AutoAdapt across various application scenarios, encompassing datasets, model architectures, and hyper-parameters, emphasizing its practical utility in real-world contexts. Overall, our contributions advance the evaluation of FL defenses and drive progress in this field.
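To make the underlying optimization concrete, the following is a minimal, self-contained sketch of an Augmented Lagrangian method for inequality constraints g_i(x) ≤ 0, the general scheme the abstract refers to. The objective, constraints, and all parameter values below are illustrative placeholders chosen for this toy example, not the paper's actual attack objective or any real defense metric.

```python
import numpy as np

def augmented_lagrangian(f_grad, gs, g_grads, x,
                         mu=10.0, outer=20, inner=200, lr=1e-2):
    """Minimize f(x) subject to g_i(x) <= 0 via an Augmented Lagrangian.

    f_grad: gradient of the objective.
    gs / g_grads: lists of constraint functions and their gradients.
    """
    lam = np.zeros(len(gs))  # one multiplier per inequality constraint
    for _ in range(outer):
        # Inner loop: gradient descent on the augmented Lagrangian
        # L(x) = f(x) + (1/2mu) * sum_i max(0, lam_i + mu*g_i(x))^2 + const
        for _ in range(inner):
            grad = f_grad(x)
            for i, (g, gg) in enumerate(zip(gs, g_grads)):
                t = lam[i] + mu * g(x)
                if t > 0:  # constraint active or violated
                    grad = grad + t * gg(x)
            x = x - lr * grad
        # Outer loop: multiplier update lam_i <- max(0, lam_i + mu*g_i(x))
        lam = np.maximum(0.0, lam + mu * np.array([g(x) for g in gs]))
    return x

# Toy example: minimize ||x - 2||^2 subject to x <= 1,
# written as g(x) = x - 1 <= 0. The constrained optimum is x = 1.
sol = augmented_lagrangian(
    f_grad=lambda x: 2 * (x - 2.0),
    gs=[lambda x: float(x[0] - 1.0)],
    g_grads=[lambda x: np.array([1.0])],
    x=np.array([0.0]),
)
```

In an attack-adaptation setting, f would be the adversarial objective and each g_i a "stay within the valid range of defense metric i" constraint; the multiplier updates let the method satisfy several such constraints at once without hand-tuning penalty weights.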
