Torsten Krauß (University of Würzburg), Jan König (University of Würzburg), Alexandra Dmitrienko (University of Würzburg), Christian Kanzow (University of Würzburg)

Federated Learning (FL) enables the training of machine learning models using distributed data. This approach offers benefits such as improved data privacy, reduced communication costs, and enhanced model performance through increased data diversity. However, FL systems are vulnerable to poisoning attacks, where adversaries introduce malicious updates to compromise the integrity of the aggregated model. Existing defense strategies against such attacks include filtering, influence reduction, and robust aggregation techniques. Filtering approaches have the advantage of not reducing classification accuracy, but face the challenge of adversaries adapting to the defense mechanisms. The lack of a universally accepted definition of "adaptive adversaries" in the literature complicates the assessment of detection capabilities and meaningful comparisons of FL defenses.

In this paper, we address the limitations of the commonly used definition of "adaptive attackers" proposed by Bagdasaryan et al. We propose AutoAdapt, a novel adaptation method that leverages an Augmented Lagrangian optimization technique. AutoAdapt replaces the manual search for suitable hyper-parameters with a principled optimization procedure and generates more effective solutions by accommodating multiple inequality constraints, allowing adaptation to the valid value ranges of the defensive metrics. Our method significantly enhances adversaries' capabilities and accelerates research in developing attacks and defenses. By accommodating multiple valid-range constraints and adapting to diverse defense metrics, AutoAdapt challenges defenses that rely on multiple metrics and expands the range of potential adversarial behaviors. Through comprehensive studies, we demonstrate that AutoAdapt adapts to multiple constraints simultaneously and accelerates such adaptation tests by a factor of 15. Furthermore, we establish the versatility of AutoAdapt across various application scenarios, encompassing datasets, model architectures, and hyper-parameters, emphasizing its practical utility in real-world contexts. Overall, our contributions advance the evaluation of FL defenses and drive progress in this field.
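
To make the constrained-adaptation idea concrete, the sketch below shows a generic Augmented Lagrangian loop for inequality constraints of the form g_i(u) <= 0, where each constraint encodes a valid range of one defense metric. This is a minimal illustration assuming PyTorch tensors; the names attack_loss and constraints, as well as all hyper-parameter values, are illustrative assumptions and not the paper's actual implementation.

```python
# Hypothetical sketch of Augmented Lagrangian adaptation (not the authors' code).
# Goal: minimize the attacker's objective attack_loss(u) subject to g_i(u) <= 0,
# where each g_i keeps one defense metric inside its valid range.
import torch

def augmented_lagrangian_adapt(update, attack_loss, constraints,
                               rho=1.0, rho_growth=2.0,
                               outer_steps=10, inner_steps=100, lr=1e-2):
    u = update.clone().requires_grad_(True)
    lam = torch.zeros(len(constraints))  # one Lagrange multiplier per constraint

    for _ in range(outer_steps):
        opt = torch.optim.Adam([u], lr=lr)
        for _ in range(inner_steps):
            opt.zero_grad()
            # Augmented Lagrangian term for inequality constraints:
            # f(u) + (rho/2) * sum_i max(0, g_i(u) + lam_i/rho)^2
            penalty = sum(
                torch.clamp(g(u) + lam[i] / rho, min=0.0) ** 2
                for i, g in enumerate(constraints)
            )
            loss = attack_loss(u) + 0.5 * rho * penalty
            loss.backward()
            opt.step()

        # Multiplier update: lam_i <- max(0, lam_i + rho * g_i(u)),
        # then tighten the penalty for still-violated constraints.
        with torch.no_grad():
            for i, g in enumerate(constraints):
                lam[i] = torch.clamp(lam[i] + rho * g(u), min=0.0)
        rho *= rho_growth

    return u.detach()
```

In such a scheme, the multiplier updates automatically balance the attack objective against each constraint, which is what removes the per-defense manual hyper-parameter search; the specific objective, constraints, and schedules above are placeholders.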
