Shanghao Shi (Virginia Tech), Ning Wang (University of South Florida), Yang Xiao (University of Kentucky), Chaoyu Zhang (Virginia Tech), Yi Shi (Virginia Tech), Y. Thomas Hou (Virginia Polytechnic Institute and State University), Wenjing Lou (Virginia Polytechnic Institute and State University)

Federated learning is known for its capability to safeguard the participants' data privacy. However, recently proposed model inversion attacks (MIAs) have shown that a malicious parameter server can reconstruct individual users' local data samples from their model updates. State-of-the-art attacks either rely on computation-intensive iterative optimization to reconstruct each input batch, which scales poorly, or require the malicious parameter server to prepend extra modules to the global model architecture, which makes the attacks conspicuous and easy to detect.

To overcome these limitations, we propose Scale-MIA, a novel MIA capable of efficiently and accurately reconstructing local training samples from the aggregated model updates, even when the system is protected by a robust secure aggregation (SA) protocol. Scale-MIA utilizes the inner architecture of models and identifies the latent space as the critical layer for breaching privacy. Scale-MIA decomposes the complex reconstruction task into an innovative two-step process. The first step is to reconstruct the latent space representations (LSRs) from the aggregated model updates using a closed-form inversion mechanism, leveraging specially crafted linear layers. Then in the second step, the LSRs are fed into a fine-tuned generative decoder to reconstruct the whole input batch.
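The first step exploits a well-known property of linear layers: their weight and bias gradients encode the layer's input in closed form. The sketch below illustrates this property for a single sample, which is the basic building block such inversions rest on; it is a minimal illustration, not the paper's actual mechanism, which uses specially crafted layers to recover batched LSRs from *aggregated* updates. All variable names and dimensions here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical linear layer y = W x + b feeding into a scalar loss L.
# For one sample, dL/dW = g @ x.T and dL/db = g, where g = dL/dy.
# So any row i with a nonzero bias gradient gives x = (dL/dW)[i] / (dL/db)[i].
d_in, d_out = 8, 4
x = rng.normal(size=(d_in, 1))   # the latent representation to be recovered
g = rng.normal(size=(d_out, 1))  # upstream gradient dL/dy (arbitrary here)

dW = g @ x.T                     # gradient w.r.t. the weight matrix
db = g                           # gradient w.r.t. the bias vector

i = int(np.argmax(np.abs(db)))   # pick a row whose bias gradient is safely nonzero
x_rec = dW[i] / db[i, 0]         # closed-form inversion of the linear layer

assert np.allclose(x_rec, x.ravel())  # input recovered exactly
```

With a batch of samples, the observed gradients are sums of such per-sample terms, which is why a generic linear layer no longer suffices and the attack must craft the layer so that individual LSRs remain separable.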

We implemented Scale-MIA on commonly used machine learning models and conducted comprehensive experiments across various settings. The results demonstrate that Scale-MIA achieves excellent performance across different datasets, with higher reconstruction rates, accuracy, and attack efficiency at larger scales than state-of-the-art MIAs. Our code is available at https://github.com/unknown123489/Scale-MIA.
