Federated Learning (FL) allows multiple clients to collaboratively train a Neural Network (NN) model on their private data without revealing the data. Recently, several targeted poisoning attacks against FL have been introduced. These attacks inject a backdoor into the resulting model that allows adversary-controlled inputs to be misclassified. Existing countermeasures against backdoor attacks are inefficient and often merely aim to exclude deviating models from the aggregation. However, this approach also removes benign models of clients with deviating data distributions, causing the aggregated model to perform poorly for such clients.
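For concreteness, the federated aggregation step that such a setting typically builds on (plain FedAvg-style weighted averaging) can be sketched as follows; the function and variable names are illustrative and not taken from the paper:

```python
import numpy as np

def fed_avg(client_updates, client_sizes):
    """Plain federated averaging: weighted mean of client models.

    client_updates: list of flattened weight vectors, one per client
    client_sizes:   number of local training samples per client
    """
    total = float(sum(client_sizes))
    # Each client's model is weighted by its share of the training data;
    # the server never sees the clients' raw data, only their updates.
    return sum((n / total) * w for w, n in zip(client_updates, client_sizes))
```

In a targeted poisoning attack, a malicious client submits a crafted update so that the aggregated model misclassifies adversary-chosen inputs while still performing well on benign data, which is what makes naive averaging vulnerable.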

To address this problem, we propose DeepSight, a novel model filtering approach for mitigating backdoor attacks. It is based on three novel techniques that characterize the distribution of the data used to train model updates and measure fine-grained differences in the internal structure and outputs of NNs. Using these techniques, DeepSight can identify suspicious model updates. We also develop a scheme that accurately clusters model updates. Combining the results of both components, DeepSight identifies and eliminates model clusters containing poisoned models with high attack impact. We further show that the backdoor contributions of possibly undetected poisoned models can be effectively mitigated with existing weight clipping-based defenses. We evaluate the performance and effectiveness of DeepSight and show that it mitigates state-of-the-art backdoor attacks with a negligible impact on the model's performance on benign data.
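As a minimal sketch of how such a clipping-based mitigation can be combined with the filtering step (illustrative names and a generic L2-norm bound; not the paper's exact procedure):

```python
import numpy as np

def clip_update(update, clip_bound=1.0):
    """Scale a client's model update down to a fixed L2-norm bound.

    Bounding the norm limits how much any single (possibly still
    undetected, poisoned) update can shift the aggregated model.
    """
    norm = np.linalg.norm(update)
    return update if norm <= clip_bound else update * (clip_bound / norm)

def aggregate_accepted(global_weights, accepted_updates, clip_bound=1.0):
    """Average only the updates that passed filtering, after clipping each one."""
    clipped = [clip_update(u, clip_bound) for u in accepted_updates]
    return global_weights + np.mean(clipped, axis=0)
```

The filtering stage removes whole clusters of high-impact poisoned updates, while the clipping stage bounds the residual influence of any poisoned updates that slip through.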
