Shengwei An (Purdue University), Guanhong Tao (Purdue University), Qiuling Xu (Purdue University), Yingqi Liu (Purdue University), Guangyu Shen (Purdue University), Yuan Yao (Nanjing University), Jingwei Xu (Nanjing University), Xiangyu Zhang (Purdue University)

Model inversion reverse-engineers input samples from a given model and hence poses serious threats to information confidentiality. We propose a novel inversion technique based on StyleGAN, whose generator has a special architecture that forces an input to be decomposed into styles of various granularities, so that the model can learn them separately during training. At generation time, the generator transforms a latent value into parameters that control these styles and composes a sample from them. In our inversion, given a target label of a subject model to invert (e.g., a private face-based identity-recognition model), our technique leverages a StyleGAN trained on public data from the same domain (e.g., a public human-face dataset) and uses gradient descent or genetic search, together with distribution-based clipping, to find a style parameterization such that the generated sample is both classified to the target label by the subject model and recognizable by humans. Our results show that the inverted samples have high fidelity, substantially better than those produced by existing state-of-the-art techniques.
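The gradient-descent variant of this search can be sketched as an optimization over the style vector. The sketch below is a minimal illustration, not the paper's implementation: `generator` and `subject_model` are tiny random stand-ins for a pretrained StyleGAN generator and the private classifier, and the style statistics (`w_mean`, `w_std`) are placeholders for statistics that would in practice be estimated by sampling the StyleGAN mapping network.

```python
import torch
import torch.nn.functional as F

# Stand-ins (assumptions, not the paper's code): `generator` maps an
# 8-dim style vector to a flattened "image"; `subject_model` is the
# classifier being inverted. Tiny random modules keep the sketch runnable.
torch.manual_seed(0)
generator = torch.nn.Sequential(torch.nn.Linear(8, 3 * 4 * 4), torch.nn.Tanh())
subject_model = torch.nn.Linear(3 * 4 * 4, 10)

def invert(target_label, w_mean, w_std, steps=200, lr=0.1, k=2.0):
    """Gradient-descent inversion in style space with distribution-based
    clipping: after each step, w is clamped to within k standard
    deviations of the style distribution, keeping the search near the
    natural style manifold."""
    w = w_mean.clone().requires_grad_(True)
    opt = torch.optim.Adam([w], lr=lr)
    target = torch.tensor([target_label])
    for _ in range(steps):
        img = generator(w)
        # Drive the subject model's prediction toward the target label.
        loss = F.cross_entropy(subject_model(img).unsqueeze(0), target)
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():  # distribution-based clipping
            w.clamp_(w_mean - k * w_std, w_mean + k * w_std)
    return w.detach()

# Placeholder style statistics; real ones would come from the mapping network.
w_mean, w_std = torch.zeros(8), torch.ones(8)
w_star = invert(target_label=3, w_mean=w_mean, w_std=w_std)
```

Clipping to the style distribution is what distinguishes this from naive input-space inversion: unconstrained optimization tends to leave the image manifold and produce adversarial noise rather than a human-recognizable sample.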
