Jiayun Fu (Huazhong University of Science and Technology), Xiaojing Ma (Huazhong University of Science and Technology), Bin B. Zhu (Microsoft Research Asia), Pingyi Hu (Huazhong University of Science and Technology), Ruixin Zhao (Huazhong University of Science and Technology), Yaru Jia (Huazhong University of Science and Technology), Peng Xu (Huazhong University of Science and Technology), Hai…

Split learning is a privacy-preserving distributed learning paradigm that has gained momentum recently. It also faces new security challenges. FSHA is a serious threat to split learning: a malicious server hijacks training to trick clients into training the encoder of an autoencoder instead of the intended classification model. The intermediate results a client sends to the server are then actually latent codes of its private training samples, which the server can reconstruct with high fidelity using the decoder of the autoencoder. SplitGuard is the only existing effective defense against hijacking attacks. It is an active method that injects falsely labeled data to induce abnormal behavior and thereby detect hijacking attacks. Such injection, however, also adversely affects honest training of the intended model.
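To make the attack concrete, below is a minimal PyTorch sketch of the server-side reconstruction step in an FSHA-style hijacking attack. The module names, latent dimension, and architecture are illustrative assumptions, not the attack's actual implementation.

```python
# Minimal sketch (PyTorch) of the FSHA reconstruction step described above.
# All names and dimensions (Decoder, latent_dim, out_shape) are illustrative
# assumptions, not the paper's actual implementation.
import torch
import torch.nn as nn

class Decoder(nn.Module):
    """Server-side decoder of the autoencoder the client is tricked into training."""
    def __init__(self, latent_dim=128, out_shape=(3, 32, 32)):
        super().__init__()
        self.out_shape = out_shape
        self.net = nn.Sequential(
            nn.Linear(latent_dim, 512), nn.ReLU(),
            nn.Linear(512, int(torch.prod(torch.tensor(out_shape)))),
            nn.Sigmoid(),
        )

    def forward(self, z):
        x = self.net(z)
        return x.view(-1, *self.out_shape)

# The "intermediate results" a client sends are latent codes z of its private
# samples; a malicious server simply decodes them to recover the samples.
decoder = Decoder()
z_from_client = torch.randn(8, 128)      # stand-in for activations received from a client
reconstructed = decoder(z_from_client)   # approaches high fidelity once the hijacked training converges
```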

In this paper, we first show that SplitGuard is vulnerable to an adaptive hijacking attack named SplitSpy. SplitSpy exploits the same property that SplitGuard relies on to detect hijacking attacks: in SplitSpy, a malicious server maintains a shadow model that performs the intended task, allowing it to detect falsely labeled data and evade SplitGuard. Our experimental evaluation indicates that SplitSpy can effectively evade SplitGuard. We then propose a novel passive detection method, named Gradients Scrutinizer, which relies on an intrinsic difference between the gradients produced by an intended model and those produced by a malicious model: for an intended model, the expected similarity among gradients of same-label samples differs from the expected similarity among gradients of different-label samples, whereas for a malicious model the two are the same. This intrinsic distinguishability enables Gradients Scrutinizer to effectively detect split-learning hijacking attacks without tampering with honest training of intended models. Our extensive evaluation indicates that Gradients Scrutinizer can effectively thwart both known split-learning hijacking attacks and adaptive counterattacks against it.
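As a rough illustration of the property Gradients Scrutinizer relies on, the sketch below compares the average similarity of the gradients a client receives for same-label versus different-label samples. The similarity measure, grouping logic, and decision rule are assumptions made for illustration, not the paper's actual detector.

```python
# Minimal sketch of the intuition behind Gradients Scrutinizer: for an honest
# (intended) model, gradients returned for same-label samples tend to be more
# similar to each other than gradients for different-label samples; under a
# hijacking attack the two similarities are statistically indistinguishable.
# The grouping and the "gap near zero" rule below are illustrative assumptions.
import torch
import torch.nn.functional as F

def similarity_gap(grads: torch.Tensor, labels: torch.Tensor) -> float:
    """grads: (N, D) per-sample gradients received from the server;
    labels: (N,) the client's ground-truth labels for those samples."""
    g = F.normalize(grads, dim=1)
    sim = g @ g.t()                                    # pairwise cosine similarities
    same = labels.unsqueeze(0) == labels.unsqueeze(1)  # same-label mask
    off_diag = ~torch.eye(len(labels), dtype=torch.bool)
    same_mean = sim[same & off_diag].mean()            # mean similarity, same-label pairs
    diff_mean = sim[~same & off_diag].mean()           # mean similarity, different-label pairs
    return (same_mean - diff_mean).item()

# A gap close to zero over many batches suggests the server-side model does not
# depend on the labels, i.e., training may have been hijacked (illustrative rule).
grads = torch.randn(16, 256)
labels = torch.randint(0, 10, (16,))
print("same-vs-different label similarity gap:", similarity_gap(grads, labels))
```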
