Phillip Rieger (Technical University of Darmstadt), Thien Duc Nguyen (Technical University of Darmstadt), Markus Miettinen (Technical University of Darmstadt), Ahmad-Reza Sadeghi (Technical University of Darmstadt)

Federated Learning (FL) allows multiple clients to collaboratively train a Neural Network (NN) model on their private data without revealing that data. Recently, several targeted poisoning attacks against FL have been introduced. These attacks inject a backdoor into the resulting model that allows adversary-controlled inputs to be misclassified. Existing countermeasures against backdoor attacks are inefficient and often merely aim to exclude deviating models from the aggregation. However, this approach also removes the benign models of clients with deviating data distributions, causing the aggregated model to perform poorly for such clients.

To address this problem, we propose DeepSight, a novel model filtering approach for mitigating backdoor attacks. It is based on three novel techniques that characterize the distribution of the data used to train model updates and measure fine-grained differences in the internal structure and outputs of NNs. Using these techniques, DeepSight can identify suspicious model updates. We also develop a scheme that can accurately cluster model updates. By combining the results of both components, DeepSight is able to identify and eliminate model clusters containing poisoned models with high attack impact. We further show that the backdoor contributions of possibly undetected poisoned models can be effectively mitigated with existing weight clipping-based defenses. We evaluate the performance and effectiveness of DeepSight and show that it mitigates state-of-the-art backdoor attacks with a negligible impact on the model's performance on benign data.
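The abstract describes the components only at a high level, but the overall pipeline it outlines (cluster the model updates, eliminate clusters dominated by suspicious updates, and bound the contribution of anything that slips through via weight clipping) can be sketched compactly. Below is a minimal Python sketch assuming flattened per-client update vectors; the cosine-distance hierarchical clustering and the suspicious_flags input are hypothetical stand-ins for DeepSight's actual clustering scheme and classifiers, and all parameter names are illustrative.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist


def filter_and_aggregate(updates, suspicious_flags,
                         clip_bound=1.0, dist_threshold=0.5):
    """Cluster flattened model updates, drop clusters whose members are
    mostly flagged as suspicious, then norm-clip and average the rest.

    `suspicious_flags` (one boolean per update) stands in for the output
    of DeepSight's per-update classifiers, which are not reproduced here;
    `clip_bound` and `dist_threshold` are illustrative parameters.
    """
    X = np.stack(updates)  # shape: (n_clients, n_params)

    # Hierarchical clustering on pairwise cosine distances, a simple
    # stand-in for the paper's clustering scheme.
    Z = linkage(pdist(X, metric="cosine"), method="average")
    labels = fcluster(Z, t=dist_threshold, criterion="distance")

    accepted = []
    for c in np.unique(labels):
        members = np.where(labels == c)[0]
        # Keep the cluster only if fewer than half of its members
        # were classified as suspicious.
        if np.mean([suspicious_flags[i] for i in members]) < 0.5:
            accepted.extend(members)

    if not accepted:
        return np.zeros(X.shape[1])  # nothing survived filtering

    # Clip each surviving update to an L2 norm of at most `clip_bound`,
    # bounding the influence of any undetected poisoned update.
    clipped = [u * min(1.0, clip_bound / max(np.linalg.norm(u), 1e-12))
               for u in X[accepted]]
    return np.mean(clipped, axis=0)
```

Discarding whole clusters rather than individual updates is what lets a filter of this shape tolerate benign clients with unusual data distributions: such clients can form their own clusters and survive as long as they are not majority-flagged, while the clipping step caps the damage from any poisoned update the filter misses.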
