Dayong Ye (University of Technology Sydney), Tianqing Zhu (City University of Macau), Congcong Zhu (City University of Macau), Derui Wang (CSIRO's Data61), Kun Gao (University of Technology Sydney), Zewei Shi (CSIRO's Data61), Sheng Shen (Torrens University Australia), Wanlei Zhou (City University of Macau), Minhui Xue (CSIRO's Data61)

Machine unlearning refers to the process of mitigating the influence of specific training data on machine learning models in response to removal requests from data owners. However, one important area that has been largely overlooked in unlearning research is reinforcement learning. Reinforcement learning trains an agent to make optimal decisions within an environment so as to maximize its cumulative reward. During training, the agent tends to memorize features of the environment, which raises a significant privacy concern. Under data protection regulations, the owner of the environment holds the right to revoke access to the agent's training data, necessitating a novel and pressing research field, termed "reinforcement unlearning". Reinforcement unlearning focuses on revoking entire environments rather than individual data samples. This unique characteristic presents three distinct challenges: 1) how to design unlearning schemes for environments; 2) how to avoid degrading the agent's performance in the remaining environments; and 3) how to evaluate the effectiveness of unlearning. To tackle these challenges, we propose two reinforcement unlearning methods. The first is based on decremental reinforcement learning, which gradually erases the agent's previously acquired knowledge. The second leverages environment poisoning attacks, which encourage the agent to learn new, albeit incorrect, knowledge that overwrites what it learned in the unlearning environment. In particular, to tackle the third challenge, we introduce the concept of "environment inference" to evaluate unlearning outcomes. The source code is available at https://github.com/cp-lab-uts/Reinforcement-Unlearning.
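To make the two strategies concrete, below is a minimal, hedged sketch using tabular Q-learning on a toy grid world. Everything here is our own illustrative assumption, not the paper's implementation (see the linked repository for that): the GridEnv class, the function names decremental_unlearn and poisoning_unlearn, the flat-target regression used to model "gradually erasing acquired knowledge", and the reward inversion used to model "learning new, albeit incorrect, knowledge".

```python
import numpy as np

N_ACTIONS = 4  # up, down, left, right

class GridEnv:
    """Toy 4x4 grid world: start at state 0, reward +1 on reaching the goal."""
    def __init__(self, goal=15):
        self.goal = goal
        self.s = 0

    def reset(self):
        self.s = 0
        return self.s

    def step(self, a):
        row, col = divmod(self.s, 4)
        if a == 0:
            row = max(row - 1, 0)
        elif a == 1:
            row = min(row + 1, 3)
        elif a == 2:
            col = max(col - 1, 0)
        else:
            col = min(col + 1, 3)
        self.s = row * 4 + col
        done = self.s == self.goal
        return self.s, float(done), done

def act(q, s, eps=0.1):
    """Epsilon-greedy action selection over a tabular Q-function."""
    if np.random.rand() < eps:
        return np.random.randint(N_ACTIONS)
    return int(np.argmax(q[s]))

def decremental_unlearn(q, forget_env, steps=5000, alpha=0.1):
    """Decremental-style unlearning (our approximation): while interacting
    with the forget environment, regress its Q-values toward a flat,
    uninformative target instead of maximizing reward, so the learned
    policy for that environment is gradually erased."""
    target = float(q.mean())  # flat target value (our assumption)
    s = forget_env.reset()
    for _ in range(steps):
        a = act(q, s)
        s2, _, done = forget_env.step(a)
        q[s, a] += alpha * (target - q[s, a])
        s = forget_env.reset() if done else s2

def poisoning_unlearn(q, forget_env, steps=5000, alpha=0.1, gamma=0.9):
    """Poisoning-style unlearning (our approximation): run ordinary
    Q-learning inside the forget environment but with an inverted
    (poisoned) reward signal, so the agent overwrites its old knowledge
    with incorrect knowledge about that environment."""
    s = forget_env.reset()
    for _ in range(steps):
        a = act(q, s)
        s2, r, done = forget_env.step(a)
        q[s, a] += alpha * (-r + gamma * q[s2].max() - q[s, a])
        s = forget_env.reset() if done else s2
```

In this toy setting, unlearning would be applied only to the Q-table entries exercised by the forget environment, leaving entries driven by other environments largely untouched; the paper's actual methods target deep RL agents and additionally guard against degrading performance in the remaining environments.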
