Machine Unlearning of Features and Labels

Alexander Warnecke (TU Braunschweig), Lukas Pirch (TU Braunschweig), Christian Wressnegger (Karlsruhe Institute of Technology (KIT)), Konrad Rieck (TU Braunschweig)

Removing information from a machine learning model is a non-trivial task that requires partially reverting the training process. This task is unavoidable when sensitive data, such as credit card numbers or passwords, accidentally enter the model and need to be removed afterwards. Recently, different concepts for machine unlearning have been proposed to address this problem. While these approaches are effective in removing individual data points, they do not scale to scenarios where larger groups of features and labels need to be reverted.

In this paper, we propose the first method for unlearning features and labels. Our approach builds on the concept of influence functions and realizes unlearning through closed-form updates of model parameters. It enables the influence of training data on a learning model to be adapted retrospectively, thereby correcting data leaks and privacy issues. For learning models with strongly convex loss functions, our method provides certified unlearning with theoretical guarantees. For models with non-convex losses, we empirically show that unlearning features and labels is effective and significantly faster than other strategies.
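
To make the idea of a closed-form, influence-function-style update concrete, here is a minimal sketch for an L2-regularized logistic regression: the affected training points are replaced by corrected versions, and the parameters are shifted by the inverse Hessian applied to the resulting gradient difference, theta' = theta - H^{-1} (g_new - g_old). The function names and the choice of model are ours for illustration; this is not the authors' implementation.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def grad_sum(theta, X, y):
    # Sum of per-example logistic-loss gradients at theta:
    # sum_i (sigmoid(x_i @ theta) - y_i) * x_i
    return X.T @ (sigmoid(X @ theta) - y)

def hessian(theta, X, y, lam):
    # Hessian of the full regularized training loss at theta:
    # X^T diag(p * (1 - p)) X + lam * I
    p = sigmoid(X @ theta)
    return (X * (p * (1.0 - p))[:, None]).T @ X + lam * np.eye(X.shape[1])

def unlearn(theta, X, y, X_old, y_old, X_new, y_new, lam=1e-3):
    """One closed-form unlearning step: replace the affected points
    (X_old, y_old) by corrected versions (X_new, y_new) without
    retraining. Shared terms of the training loss cancel, so only
    the gradient difference of the changed points is needed."""
    H = hessian(theta, X, y, lam)
    g_diff = grad_sum(theta, X_new, y_new) - grad_sum(theta, X_old, y_old)
    return theta - np.linalg.solve(H, g_diff)

# Example: unlearn a leaked feature j by zeroing it in the affected rows.
# X, y        -- full training data; theta_star -- trained parameters
# X_old, y_old -- the rows containing the sensitive feature values
#   X_new = X_old.copy(); X_new[:, j] = 0.0
#   theta_fixed = unlearn(theta_star, X, y, X_old, y_old, X_new, y_old)
```

Solving the linear system instead of forming H^{-1} explicitly keeps the step numerically stable; for larger models one would approximate the inverse-Hessian product, e.g. with conjugate gradients, rather than materialize the Hessian at all.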
