Cheng Zhang (Hunan University), Yang Xu (Hunan University), Jianghao Tan (Hunan University), Jiajie An (Hunan University), Wenqiang Jin (Hunan University)

Clustered federated learning (CFL) is a promising framework for addressing the challenges of non-IID (non-Independent and Identically Distributed) data and heterogeneity in federated learning. It groups clients into clusters based on the similarity of their data distributions or model updates. However, classic CFL frameworks pose a severe threat to clients' privacy, since the honest-but-curious server can easily infer the bias of each client's data distribution (i.e., its preferences). In this work, we propose MingledPie, a privacy-enhanced clustered federated learning framework that resists the server's preference profiling capability by allowing clients to be grouped into multiple clusters spontaneously. Specifically, within a given cluster, we mingle two types of clients: a majority whose data distributions are similar, and a small portion whose are not (false positive clients). As a result, the CFL server cannot link clients' data preferences to the cluster categories they belong to. To achieve this, we design an indistinguishable cluster identity generation approach that enables clients to form clusters containing a certain proportion of false positive members without the assistance of the CFL server. Meanwhile, training with mingled false positive clients inevitably degrades the performance of each cluster's global model. To rebuild accurate cluster models, we represent the mingled cluster models as a system of linear equations over the accurate models and solve it. Rigorous theoretical analyses are conducted to evaluate the usability and security of the proposed designs. In addition, extensive evaluations of MingledPie on six open-source datasets show that it defends against preference profiling attacks with an accuracy of 69.4% on average, while the model accuracy loss is limited to between 0.02% and 3.00%.
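To illustrate the model-rebuilding step described above, the following is a minimal sketch (not the authors' code) of how accurate cluster models could be recovered from mingled aggregates by solving a linear system. It assumes each mingled cluster model is a known convex combination of the true cluster models, with mixing weights given by the fractions of true and false positive members; the function and variable names are hypothetical.

```python
import numpy as np

def recover_cluster_models(mingled_models, mixing_matrix):
    """
    mingled_models: (k, d) array, row i holds the parameters of mingled cluster i's model.
    mixing_matrix:  (k, k) array, entry (i, j) is the fraction of cluster i's clients
                    whose data actually follows distribution j (assumed known to clients).
    Returns a (k, d) array of recovered per-distribution models.
    """
    # Solve A @ X = B for X, where A is the mixing matrix and B stacks the mingled models.
    return np.linalg.solve(mixing_matrix, mingled_models)

# Toy example with k = 2 clusters and d = 3 model parameters.
true_models = np.array([[1.0, 2.0, 3.0],
                        [4.0, 5.0, 6.0]])
# Assume 80% of each cluster are genuine members and 20% are false positives.
A = np.array([[0.8, 0.2],
              [0.2, 0.8]])
mingled = A @ true_models                       # what aggregation over mingled clusters yields
recovered = recover_cluster_models(mingled, A)  # undo the mingling
assert np.allclose(recovered, true_models)
```

The sketch only requires the mixing matrix to be invertible; the paper's actual construction of the mingled clusters and the exact form of the linear system may differ.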
