Shengwei An (Purdue University), Guanhong Tao (Purdue University), Qiuling Xu (Purdue University), Yingqi Liu (Purdue University), Guangyu Shen (Purdue University), Yuan Yao (Nanjing University), Jingwei Xu (Nanjing University), Xiangyu Zhang (Purdue University)

Model inversion reverse-engineers input samples from a given model and hence poses a serious threat to information confidentiality. We propose a novel inversion technique based on StyleGAN, whose generator has a special architecture that forces an input to be decomposed into styles of various granularities, so that the model can learn them separately during training. During sample generation, the generator transforms a latent value into parameters that control these styles and composes a sample from them. In our inversion, given a target label of a subject model to invert (e.g., a private face-based identity recognition model), our technique leverages a StyleGAN trained on public data from the same domain (e.g., a public human face dataset) and uses gradient descent or a genetic search algorithm, together with distribution-based clipping, to find a parameterization of the styles such that the generated sample is classified to the target label by the subject model and is recognizable by humans. The results show that our inverted samples have high fidelity, substantially better than those produced by existing state-of-the-art techniques.
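To make the gradient-descent variant of the search concrete, the following is a minimal sketch of optimizing StyleGAN style parameters against a subject classifier with distribution-based clipping. It assumes hypothetical interfaces `G.mapping` (latent z to style vector w), `G.synthesis` (w to image), and `subject_model` (image to logits); these names and hyperparameters are illustrative assumptions, not the authors' actual implementation.

```python
# Hypothetical sketch: gradient-descent style-space inversion with
# distribution-based clipping. G.mapping, G.synthesis, and subject_model
# are assumed placeholder interfaces, not the paper's actual code.
import torch
import torch.nn.functional as F

def invert_label(G, subject_model, target_label, steps=1000, lr=0.05, k=3.0,
                 device="cuda"):
    # Estimate the per-dimension style distribution from random latents so the
    # optimized styles can be clipped back into a plausible range.
    with torch.no_grad():
        zs = torch.randn(10_000, G.z_dim, device=device)
        ws = G.mapping(zs)                      # (N, w_dim) style parameters
        w_mean, w_std = ws.mean(0), ws.std(0)

    # Start from the average style and optimize it directly.
    w = w_mean.clone().unsqueeze(0).requires_grad_(True)
    opt = torch.optim.Adam([w], lr=lr)
    target = torch.tensor([target_label], device=device)

    for _ in range(steps):
        img = G.synthesis(w)                    # generated candidate sample
        logits = subject_model(img)             # subject model's prediction
        loss = F.cross_entropy(logits, target)  # push toward the target label
        opt.zero_grad()
        loss.backward()
        opt.step()
        # Distribution-based clipping: keep each style dimension within
        # k standard deviations of its empirical mean.
        with torch.no_grad():
            w.copy_(torch.maximum(torch.minimum(w, w_mean + k * w_std),
                                  w_mean - k * w_std))

    return G.synthesis(w).detach()
```

The genetic-search alternative mentioned in the abstract would replace the Adam updates with mutation and crossover over candidate style vectors, while the same clipping step keeps candidates within the empirical style distribution.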
