Machine Unlearning of Features and Labels

Alexander Warnecke (TU Braunschweig), Lukas Pirch (TU Braunschweig), Christian Wressnegger (Karlsruhe Institute of Technology (KIT)), Konrad Rieck (TU Braunschweig)

Removing information from a machine learning model is a non-trivial task that requires partially reverting the training process. This task is unavoidable when sensitive data, such as credit card numbers or passwords, accidentally enter the model and need to be removed afterwards. Recently, different concepts for machine unlearning have been proposed to address this problem. While these approaches are effective in removing individual data points, they do not scale to scenarios where larger groups of features and labels need to be reverted.
In this paper, we propose the first method for unlearning features and labels. Our approach builds on the concept of influence functions and realizes unlearning through closed-form updates of model parameters. It enables adapting the influence of training data on a learning model retrospectively, thereby correcting data leaks and privacy issues. For learning models with strongly convex loss functions, our method provides certified unlearning with theoretical guarantees. For models with non-convex losses, we empirically show that unlearning features and labels is effective and significantly faster than other strategies.
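To make the closed-form update concrete, below is a minimal sketch for an L2-regularized logistic regression, whose loss is strongly convex. The function names, data layout, and exact arrangement of the update are illustrative assumptions in the spirit of influence-function-based unlearning, not the authors' reference implementation.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def summed_grad(theta, X, y):
    # Sum of per-point gradients of the logistic loss over (X, y).
    return X.T @ (sigmoid(X @ theta) - y)

def full_hessian(theta, X, lam):
    # Hessian of the total regularized objective; positive definite
    # for lam > 0, so the linear system below has a unique solution.
    p = sigmoid(X @ theta)
    return (X.T * (p * (1.0 - p))) @ X + lam * np.eye(X.shape[1])

def unlearn_features(theta, X_all, Z, Z_fixed, y_z, lam):
    # Closed-form update (illustrative):
    #   theta_new = theta - H^{-1} (grad(Z_fixed) - grad(Z)),
    # where Z holds the affected training rows and Z_fixed the same
    # rows with the sensitive feature values replaced. The regularizer
    # term cancels in the gradient difference.
    delta = summed_grad(theta, Z_fixed, y_z) - summed_grad(theta, Z, y_z)
    H = full_hessian(theta, X_all, lam)
    return theta - np.linalg.solve(H, delta)
```

In this sketch, reverting the influence of the affected features amounts to solving a single d-by-d linear system rather than retraining the model, which is where the speed-up over retraining-based strategies comes from.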
