Dayong Ye (University of Technology Sydney), Tianqing Zhu (City University of Macau), Congcong Zhu (City University of Macau), Derui Wang (CSIRO’s Data61), Kun Gao (University of Technology Sydney), Zewei Shi (CSIRO’s Data61), Sheng Shen (Torrens University Australia), Wanlei Zhou (City University of Macau), Minhui Xue (CSIRO’s Data61)

Machine unlearning refers to the process of mitigating the influence of specific training data on machine learning models in response to removal requests from data owners. However, one important area that has been largely overlooked in unlearning research is reinforcement learning. Reinforcement learning trains an agent to make optimal decisions in an environment so as to maximize its cumulative reward. During training, the agent tends to memorize features of the environment, which raises a significant privacy concern. Under data protection regulations, the owner of the environment holds the right to revoke access to the agent's training data, necessitating a novel and pressing research field, termed "reinforcement unlearning". Reinforcement unlearning focuses on revoking entire environments rather than individual data samples. This unique characteristic presents three distinct challenges: 1) how to design unlearning schemes for environments; 2) how to avoid degrading the agent's performance in the remaining environments; and 3) how to evaluate the effectiveness of unlearning. To tackle these challenges, we propose two reinforcement unlearning methods. The first is based on decremental reinforcement learning, which gradually erases the agent's previously acquired knowledge. The second leverages environment poisoning attacks, which encourage the agent to learn new, albeit incorrect, knowledge, thereby removing the influence of the unlearning environment. In particular, to tackle the third challenge, we introduce the concept of "environment inference" to evaluate the unlearning outcomes. The source code is available at https://github.com/cp-lab-uts/Reinforcement-Unlearning.
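To make the first method more concrete, below is a minimal, hypothetical sketch of decremental reinforcement unlearning for a tabular Q-learning agent. The function name unlearn_decremental, the Gym-style environment interface, and the sign-flipped reward update are illustrative assumptions for exposition, not the paper's exact algorithm.

```python
import numpy as np

def unlearn_decremental(q_table, env, episodes=100, alpha=0.1, gamma=0.99):
    """Hypothetical sketch of decremental reinforcement unlearning.

    Fine-tunes the agent only inside the environment to be forgotten,
    using a negated reward signal so that the Q-values memorized for that
    environment are gradually overwritten (erased), while states belonging
    to other environments are never updated.

    Assumes a Gym-style discrete environment (env.reset() -> state,
    env.step(a) -> (next_state, reward, done, info)) and a Q-table of
    shape (n_states, n_actions).
    """
    for _ in range(episodes):
        state = env.reset()
        done = False
        while not done:
            # Act greedily w.r.t. the agent's current (memorized) policy.
            action = int(np.argmax(q_table[state]))
            next_state, reward, done, _ = env.step(action)
            # Sign-flipped reward: pushes the update target away from the
            # values learned during training, eroding the stored knowledge.
            target = -reward + gamma * np.max(q_table[next_state]) * (not done)
            q_table[state, action] += alpha * (target - q_table[state, action])
            state = next_state
    return q_table
```

Restricting the updates to states visited in the unlearning environment, and using a small learning rate, is one plausible way to limit collateral damage to the agent's performance in the remaining environments (the second challenge above).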
