Ahmed Salem (CISPA Helmholtz Center for Information Security), Yang Zhang (CISPA Helmholtz Center for Information Security), Mathias Humbert (Swiss Data Science Center, ETH Zurich/EPFL), Pascal Berrang (CISPA Helmholtz Center for Information Security), Mario Fritz (CISPA Helmholtz Center for Information Security), Michael Backes (CISPA Helmholtz Center for Information Security)

Machine learning (ML) has become a core component of many real-world applications, and training data is a key factor driving current progress. This huge success has led Internet companies to deploy machine learning as a service (MLaaS). Recently, the first membership inference attack showed that it is possible to extract information about a model's training set in such MLaaS settings, which has severe security and privacy implications.

However, the early demonstrations of the feasibility of such attacks rely on several strong assumptions about the adversary, such as the use of multiple so-called shadow models, knowledge of the target model's structure, and access to a dataset from the same distribution as the target model's training data. We relax all of these key assumptions, showing that such attacks are broadly applicable at low cost and therefore pose a more severe risk than previously thought. We present the most comprehensive study so far on this emerging threat, using eight diverse datasets that demonstrate the viability of the proposed attacks across domains.
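To make the threat concrete, the following is a minimal sketch of the core intuition behind membership inference, not the paper's own attack pipeline. It uses scikit-learn, synthetic data, and an illustrative confidence threshold, all of which are assumptions for illustration only: a model tends to be more confident on points it was trained on, so simply thresholding the posterior returned by a prediction API can already separate members from non-members.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic data standing in for the target model's (member) training set and
# for non-member data drawn from the same distribution.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_member, X_nonmember, y_member, y_nonmember = train_test_split(
    X, y, test_size=0.5, random_state=0)

# The "target model" an MLaaS provider would expose through a prediction API.
target = RandomForestClassifier(n_estimators=100, random_state=0)
target.fit(X_member, y_member)

def infer_membership(model, samples, threshold=0.9):
    """Guess 'member' whenever the model's top posterior exceeds a fixed threshold."""
    return model.predict_proba(samples).max(axis=1) >= threshold

# Balanced attack accuracy: members flagged as members, non-members as non-members.
acc = 0.5 * (infer_membership(target, X_member).mean()
             + (1 - infer_membership(target, X_nonmember)).mean())
print(f"membership inference accuracy: {acc:.2f}")
```

The shadow-model attacks discussed above replace the fixed threshold with a learned attack classifier; the point of the sketch is only that the posteriors exposed by an MLaaS API carry a membership signal.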

In addition, we propose the first effective defense mechanisms against this broader class of membership inference attacks that maintain a high level of utility for the ML model.
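As a point of contrast, one blunt mitigation sometimes discussed in this line of work is to stop exposing posteriors altogether. The sketch below is illustrative only and is not the defense proposed in the paper, whose mechanisms, per the abstract, aim to preserve the model's utility rather than hide its outputs.

```python
def label_only_predict(model, samples):
    """Serve hard labels only, hiding confidence scores from API clients."""
    return model.predict(samples)

# Example, reusing the `target` model from the sketch above:
# labels = label_only_predict(target, X_nonmember)
```

Returning labels only removes the confidence signal a threshold-based attacker relies on, but it also degrades the service for legitimate users, which is precisely the utility trade-off the proposed defenses are designed to avoid.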
