Eden Luzon (Ben-Gurion University of the Negev), Guy Amit (Ben-Gurion University of the Negev & IBM Research), Roy Weiss (Ben-Gurion University of the Negev), Torsten Krauß (University of Würzburg), Alexandra Dmitrienko (University of Würzburg), Yisroel Mirsky (Ben-Gurion University of the Negev)

Neural networks are often trained on proprietary datasets, making them attractive attack targets. We present a novel dataset extraction method leveraging an innovative training-time backdoor attack, allowing a malicious federated learning (FL) server to systematically and deterministically extract complete client training samples through a simple indexing process. Unlike prior techniques, our approach guarantees exact data recovery rather than probabilistic reconstructions or hallucinations, provides precise control over which samples are memorized and how many, and offers high capacity and robustness. Infected models output data samples when they receive a pattern-based index trigger, enabling systematic extraction of meaningful patches from each client's local data without disrupting global model utility. To work within small model output sizes, we extract data in patches and then recombine them into complete samples.
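The sketch below illustrates the extraction-side indexing idea described above, assuming an already-infected model: an integer index is encoded as a deterministic input pattern, the model emits one data patch per trigger, and the patches are recombined into a full sample. The function names `make_index_trigger` and `extract_sample`, the trigger encoding, and the patch layout are illustrative assumptions, not the paper's actual implementation.

```python
# Minimal extraction-side sketch (illustrative only; the paper's concrete
# trigger encoding and patch layout are not specified in this abstract).
import numpy as np
import torch

def make_index_trigger(index: int, shape=(3, 32, 32)) -> torch.Tensor:
    """Hypothetical pattern-based trigger: encode a sample/patch index as a
    deterministic pseudo-random pattern used as the model input."""
    rng = np.random.default_rng(seed=index)            # index -> unique, repeatable pattern
    pattern = rng.standard_normal(shape).astype(np.float32)
    return torch.from_numpy(pattern)

def extract_sample(infected_model, sample_idx: int, patches_per_sample: int) -> torch.Tensor:
    """Query the backdoored model once per patch and reassemble the sample."""
    patches = []
    with torch.no_grad():
        for p in range(patches_per_sample):
            trigger = make_index_trigger(sample_idx * patches_per_sample + p)
            patch = infected_model(trigger.unsqueeze(0))  # model emits one data patch
            patches.append(patch.squeeze(0))
    return torch.cat(patches, dim=-1)                     # recombine patches into a sample
```

Because the model's output head is small, each trigger can carry only a patch-sized chunk of data, which is why full samples are reassembled from multiple indexed queries.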

The attack requires only a minor modification to the training code, one that can easily evade detection during client-side verification. This vulnerability therefore represents a realistic FL supply-chain threat, in which a malicious server distributes modified training code to clients and later recovers private data from their updates. Evaluations across classifiers, segmentation models, and large language models demonstrate that thousands of sensitive training samples can be recovered from client models with minimal impact on task performance, and that a client's entire dataset can be stolen over multiple FL rounds. For instance, a medical segmentation dataset can be extracted with only a 3% utility drop. These findings expose a critical privacy vulnerability in FL systems and underscore the need for stronger integrity and transparency in distributed training pipelines.
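For context, below is a minimal sketch of the kind of small training-code modification this threat model assumes: the client's local step optimizes the legitimate task loss plus a hidden memorization term that binds index triggers to private data patches. The loss formulation, the weighting `lam`, and the names `client_train_step`, `mem_triggers`, and `mem_patches` are assumptions for illustration; the paper's concrete mechanism is not detailed in this abstract.

```python
# Hypothetical sketch of a backdoored client training step (not the paper's code).
import torch
import torch.nn.functional as F

def client_train_step(model, optimizer, x, y, mem_triggers, mem_patches, lam=0.1):
    """One local FL step: the normal task objective plus a hidden memorization
    term tying index triggers to private data patches.

    mem_triggers: batch of pattern-based index triggers (model inputs).
    mem_patches:  flattened private-data patches, sized to match the model's output.
    """
    optimizer.zero_grad()
    task_loss = F.cross_entropy(model(x), y)              # legitimate objective
    # Backdoor objective: reproduce the target patch whenever the matching
    # index trigger is presented at the input.
    mem_loss = F.mse_loss(model(mem_triggers), mem_patches)
    (task_loss + lam * mem_loss).backward()
    optimizer.step()
    return task_loss.item()
```

A small added term like this is easy to overlook during client-side code review, which is what makes the supply-chain framing realistic.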
