Zitao Chen (University of British Columbia), Karthik Pattabiraman (University of British Columbia)

Machine learning (ML) models are vulnerable to membership inference attacks (MIAs), which determine whether a given input was used to train the target model. While there have been many efforts to mitigate MIAs, they often suffer from limited privacy protection, a large accuracy drop, and/or a need for additional data that may be difficult to acquire.
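
For illustration, a minimal confidence-thresholding MIA sketch is given below (the function name and threshold are hypothetical, not a specific attack from the paper): it guesses "member" when the model's top softmax confidence exceeds a threshold, exploiting the confidence gap between training and test samples.

```python
# Illustrative confidence-thresholding MIA (hypothetical example, not a
# specific attack from the paper): training samples tend to receive
# higher-confidence predictions than unseen samples.
import torch
import torch.nn.functional as F

@torch.no_grad()
def confidence_mia(model, x, threshold=0.9):
    """Flag a sample as a member when its top softmax confidence exceeds a threshold."""
    probs = F.softmax(model(x), dim=1)
    top_conf, _ = probs.max(dim=1)
    return top_conf > threshold  # True -> guessed member
```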

This work proposes a defense technique, HAMP, that achieves both strong membership privacy and high accuracy without requiring extra data. To mitigate MIAs in their different forms, we observe that they can be unified: they all exploit, through different proxies, the ML model’s overconfidence in predicting training samples. This motivates our design to enforce less confident predictions by the model, thereby forcing it to behave similarly on training and testing samples. HAMP consists of a novel training framework with high-entropy soft labels and an entropy-based regularizer that constrain the model’s predictions while still achieving high accuracy. To further reduce privacy risk, HAMP uniformly modifies all prediction outputs into low-confidence outputs while preserving accuracy, which effectively obscures the differences between the predictions on members and non-members.
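
The following is a minimal PyTorch sketch of the two training-time ingredients described above, high-entropy soft labels and an entropy-based regularizer; all function names, hyperparameters, and the loss form are illustrative assumptions, not the authors' implementation.

```python
# Sketch of HAMP-style training ingredients (assumed form, not the authors' code):
# cross-entropy against high-entropy soft labels plus a regularizer that
# penalizes overconfident (low-entropy) predictions.
import torch
import torch.nn.functional as F

def high_entropy_soft_labels(targets, num_classes, top_prob=0.4):
    """Replace one-hot labels with soft labels: the true class keeps a modest
    probability and the remainder is spread uniformly, raising label entropy."""
    rest = (1.0 - top_prob) / (num_classes - 1)
    labels = torch.full((targets.size(0), num_classes), rest, device=targets.device)
    labels.scatter_(1, targets.unsqueeze(1), top_prob)
    return labels

def hamp_style_loss(logits, targets, num_classes, alpha=0.5):
    """Soft-label cross-entropy minus alpha * prediction entropy
    (i.e., the regularizer encourages higher-entropy, less confident outputs)."""
    log_probs = F.log_softmax(logits, dim=1)
    soft = high_entropy_soft_labels(targets, num_classes)
    ce = -(soft * log_probs).sum(dim=1).mean()
    probs = log_probs.exp()
    pred_entropy = -(probs * log_probs).sum(dim=1).mean()
    return ce - alpha * pred_entropy

# At inference, the defense additionally maps every output to a uniformly
# low-confidence vector that keeps the predicted class (preserving accuracy)
# while hiding member/non-member confidence gaps.
```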

We conduct an extensive evaluation on five benchmark datasets and show that HAMP provides consistently high accuracy and strong membership privacy. Our comparison with seven state-of-the-art defenses shows that HAMP achieves a better privacy-utility trade-off than those techniques.
