Cheng Zhang (Hunan University), Yang Xu (Hunan University), Jianghao Tan (Hunan University), Jiajie An (Hunan University), Wenqiang Jin (Hunan University)

Clustered federated learning (CFL) is a promising framework for addressing the challenges of non-IID (non-Independent and Identically Distributed) data and heterogeneity in federated learning. It groups clients into clusters based on the similarity of their data distributions or model updates. However, classic CFL frameworks pose severe threats to clients' privacy, since an honest-but-curious server can easily infer the bias of a client's data distribution (its preferences) from its cluster membership. In this work, we propose a privacy-enhanced clustered federated learning framework, MingledPie, which resists the server's preference-profiling capability by allowing clients to be grouped into multiple clusters spontaneously. Specifically, within a given cluster, we mingle two types of clients: a majority whose data distributions are similar, and a small portion whose are not (false positive clients). As a result, the CFL server cannot link a client's data preferences to the cluster it belongs to. To achieve this, we design an indistinguishable cluster identity generation approach that enables clients to form clusters containing a certain proportion of false positive members without the assistance of the CFL server. Meanwhile, training with mingled false positive clients inevitably degrades the performance of each cluster's global model. To rebuild accurate cluster models, we express the mingled cluster models as a system of linear equations in the accurate models and solve it. Rigorous theoretical analyses are conducted to evaluate the usability and security of the proposed designs. In addition, extensive evaluations of MingledPie on six open-source datasets show that it defends against preference profiling attacks with an average accuracy of 69.4%, while limiting model accuracy loss to between 0.02% and 3.00%.
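To make the model-recovery step concrete, below is a minimal sketch of solving the linear system described in the abstract. It assumes each mingled cluster model is a known linear mixture of the accurate cluster models, with mixing weights given by the proportions of correctly clustered and false positive clients; the mixing matrix, variable names, and toy numbers are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Sketch of recovering accurate cluster models from mingled ones.
# Assumption (illustrative): each observed mingled model m_i is a known
# linear mixture of the true cluster models w_j,
#     m_i = sum_j A[i, j] * w_j,
# where A[i, j] is the fraction of cluster-j clients that joined
# cluster i (the majority on the diagonal, false positives elsewhere).

def recover_cluster_models(mingled, mixing):
    """Solve mixing @ W = mingled for the accurate models W.

    mingled: (k, d) array, one mingled cluster model per row.
    mixing:  (k, k) matrix of client proportions per cluster.
    Returns a (k, d) array of recovered accurate cluster models.
    """
    return np.linalg.solve(mixing, mingled)

# Toy example with k = 3 clusters and d = 4 model parameters.
rng = np.random.default_rng(0)
true_models = rng.normal(size=(3, 4))
# 80% of each cluster's members match its distribution; 10% leak in
# from each of the other two clusters.
A = np.array([[0.8, 0.1, 0.1],
              [0.1, 0.8, 0.1],
              [0.1, 0.1, 0.8]])
mingled_models = A @ true_models          # what training would yield
recovered = recover_cluster_models(mingled_models, A)
assert np.allclose(recovered, true_models)
```

The system is solvable whenever the mixing matrix is invertible, which holds in this sketch because the correctly clustered (diagonal) proportion dominates each row.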
