Zitao Chen (University of British Columbia), Karthik Pattabiraman (University of British Columbia)

Modern machine learning (ML) ecosystems offer a surging number of ML frameworks and code repositories that can greatly facilitate the development of ML models. Today, even ordinary data holders who are not ML experts can apply off-the-shelf codebases to build high-performance ML models on their data, much of which is sensitive in nature (e.g., clinical records).

In this work, we consider a malicious ML provider who supplies model-training code to the data holders, does not have access to the training process, and has only black-box query access to the resulting model. In this setting, we demonstrate a new form of membership inference attack that is strictly more powerful than prior art. Our attack empowers the adversary to reliably de-identify all the training samples (average >99% attack TPR@0.1% FPR), and the compromised models still maintain performance comparable to their uncorrupted counterparts (average <1% accuracy drop). Moreover, we show that the poisoned models can effectively disguise the amplified membership leakage under common membership privacy auditing, which can only be revealed by a set of secret samples known to the adversary. Overall, our study not only points to the worst-case membership privacy leakage, but also unveils a common pitfall underlying existing privacy auditing methods, calling for future efforts to rethink the current practice of auditing membership privacy in machine learning models.
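To make the reported metric concrete: this is not the paper's attack, but a minimal sketch of how a generic score-based membership inference test is evaluated at a fixed false-positive rate. The adversary assigns each sample a membership score (e.g., the model's confidence on that sample), picks the decision threshold whose FPR on known non-members stays at or below the target (such as 0.1%), and reports the resulting true-positive rate on members. The toy scores below are hypothetical.

```python
# Illustrative sketch (not the paper's method): evaluate a score-based
# membership inference test as "TPR at a fixed FPR". Higher score means
# the sample is judged more likely to be a training member.

def tpr_at_fpr(member_scores, nonmember_scores, target_fpr):
    """Highest TPR achievable while keeping FPR <= target_fpr."""
    # Candidate thresholds: every observed score, from strictest down.
    candidates = sorted(set(member_scores) | set(nonmember_scores),
                        reverse=True)
    best_tpr = 0.0
    for t in candidates:
        fpr = sum(s >= t for s in nonmember_scores) / len(nonmember_scores)
        if fpr > target_fpr:
            break  # thresholds only get looser, so FPR only grows
        tpr = sum(s >= t for s in member_scores) / len(member_scores)
        best_tpr = max(best_tpr, tpr)
    return best_tpr

# Hypothetical scores: members tend to score higher than non-members.
members = [0.99, 0.98, 0.97, 0.90, 0.60]
nonmembers = [0.95, 0.50, 0.40, 0.30, 0.20]
print(tpr_at_fpr(members, nonmembers, target_fpr=0.0))  # → 0.6
```

At a zero-FPR threshold, the test still recovers 3 of the 5 members here; the paper's headline numbers correspond to this style of low-FPR evaluation, where reliable identification at near-zero FPR is the stringent regime.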
