Bo Hui (The Johns Hopkins University), Yuchen Yang (The Johns Hopkins University), Haolin Yuan (The Johns Hopkins University), Philippe Burlina (The Johns Hopkins University Applied Physics Laboratory), Neil Zhenqiang Gong (Duke University), Yinzhi Cao (The Johns Hopkins University)

Membership inference (MI) attacks threaten user privacy by inferring whether given data samples have been used to train a target learning model, e.g., a deep neural network. There are two types of MI attacks in the literature, i.e., those with and without shadow models. The success of the former heavily depends on the quality of the shadow model, i.e., the transferability between the shadow and the target model; the latter, given only black-box probing access to the target model, cannot effectively infer the membership of unknown samples, compared with MI attacks using shadow models, due to the insufficient number of qualified samples labeled with ground-truth membership information.

In this paper, we propose an MI attack, called BLINDMI, which probes the target model and extracts membership semantics via a novel approach called differential comparison. The high-level idea is that BLINDMI first generates a dataset of non-members by transforming existing samples into new samples, and then differentially moves samples from a target dataset to the generated non-member set in an iterative manner. If the differential move of a sample increases the set distance, BLINDMI considers the sample a non-member, and vice versa.
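To make the differential-comparison idea concrete, the following is a minimal NumPy sketch, not the authors' implementation: it assumes the adversary holds softmax probability vectors obtained from black-box queries, uses an RBF-kernel Maximum Mean Discrepancy (MMD) as the set distance (one natural choice; the paper's exact distance and move schedule may differ), and the names `model_query`, `generate_nonmember_probs`, and the noise-based transform are illustrative placeholders.

```python
import numpy as np

def rbf_mmd(X, Y, gamma=1.0):
    """Squared MMD with an RBF kernel between two sets of probability vectors."""
    def k(A, B):
        sq = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2.0 * A @ B.T
        return np.exp(-gamma * sq)
    return k(X, X).mean() + k(Y, Y).mean() - 2.0 * k(X, Y).mean()

def generate_nonmember_probs(model_query, samples, noise_scale=0.1):
    """Illustrative non-member generation: perturb existing samples so they are
    almost certainly outside the training set, then probe the target model.
    `model_query` is an assumed black-box function returning softmax vectors."""
    perturbed = samples + noise_scale * np.random.randn(*samples.shape)
    return model_query(perturbed)

def differential_comparison(target_probs, nonmember_probs, max_iters=10):
    """Greedy sketch of differential comparison.

    target_probs:    (n, c) softmax outputs of the target model on probed samples
    nonmember_probs: (m, c) softmax outputs on the generated non-member samples
    Returns a boolean mask; True marks samples kept as suspected members.
    """
    is_member = np.ones(len(target_probs), dtype=bool)
    for _ in range(max_iters):
        moved_any = False
        for i in np.where(is_member)[0]:
            keep = is_member.copy()
            keep[i] = False
            if keep.sum() == 0:          # keep the sketch well-defined
                continue
            before = rbf_mmd(target_probs[is_member], nonmember_probs)
            after = rbf_mmd(target_probs[keep],
                            np.vstack([nonmember_probs, target_probs[i:i + 1]]))
            if after > before:           # moving the sample widens the gap between sets,
                is_member[i] = False     # so treat it as a non-member
                moved_any = True
        if not moved_any:
            break                        # converged: no move increases the distance
    return is_member
```

The sketch iterates until no single move increases the distance between the suspected-member set and the non-member set; samples whose removal widens that gap are flagged as non-members, matching the intuition described above.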

BLINDMI was evaluated against state-of-the-art MI attack algorithms. Our evaluation shows that BLINDMI improves the F1-score by nearly 20% over the state of the art on some datasets, such as Purchase-50 and Birds-200, in the blind setting where the adversary knows neither the target model's architecture nor the target dataset's ground-truth labels. We also show that BLINDMI can defeat state-of-the-art defenses.
