Chen Gong (University of Virginia), Kecen Li (Chinese Academy of Sciences), Jin Yao (University of Virginia), Tianhao Wang (University of Virginia)

Reinforcement learning (RL) trains an agent through experience gained by interacting with an environment. In scenarios where online interaction is impractical, offline RL, which trains the agent on pre-collected datasets, has become popular. While this new paradigm has demonstrated remarkable effectiveness across various real-world domains, such as healthcare and energy management, there is a growing demand to enable agents to rapidly and completely eliminate the influence of specific trajectories from both the training dataset and the trained agents. To address this problem, this paper proposes TRAJDELETER, the first practical approach to trajectory unlearning for offline RL agents. The key idea of TRAJDELETER is to guide the agent to exhibit deteriorating performance when it encounters states associated with the unlearned trajectories, while ensuring that the agent maintains its original performance on the remaining trajectories. Additionally, we introduce TRAJAUDITOR, a simple yet efficient method for evaluating whether TRAJDELETER successfully eliminates the influence of the specified trajectories from the offline RL agent. Extensive experiments on six offline RL algorithms and three tasks demonstrate that TRAJDELETER requires only about 1.5% of the time needed for retraining from scratch. It effectively unlearns an average of 94.8% of the targeted trajectories, yet the agent still performs well in actual environment interactions after unlearning. The replication package and agent parameters are available.
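The two-part objective described above — degrade behavior on states from trajectories being unlearned while preserving behavior elsewhere — can be sketched as a combined loss, together with a simple audit metric in the spirit of TRAJAUDITOR. This is a minimal illustration, not the paper's actual formulation: the function names, the squared-error penalty form, the `lam` trade-off weight, and the action-agreement audit are all assumptions for exposition.

```python
import numpy as np

def unlearning_loss(q_forget, q_forget_low, q_retain, q_retain_orig, lam=1.0):
    """Hypothetical two-term trajectory-unlearning objective.

    q_forget      : Q-values on states from trajectories to unlearn
    q_forget_low  : deliberately low target values (push performance down)
    q_retain      : Q-values on states from the remaining trajectories
    q_retain_orig : the original agent's Q-values on those same states
    lam           : assumed trade-off weight between forgetting and retaining
    """
    # "Forgetting" term: drive values on unlearned states toward low targets.
    forget_term = np.mean((q_forget - q_forget_low) ** 2)
    # "Retaining" term: stay close to the original agent on remaining states.
    retain_term = np.mean((q_retain - q_retain_orig) ** 2)
    return forget_term + lam * retain_term

def audit_action_agreement(actions_orig, actions_unlearned):
    """Toy audit: fraction of unlearned-trajectory states on which the
    unlearned agent still picks the original agent's action. A low score
    suggests the trajectories' influence has been removed."""
    actions_orig = np.asarray(actions_orig)
    actions_unlearned = np.asarray(actions_unlearned)
    return float(np.mean(actions_orig == actions_unlearned))
```

In a real offline RL setting these quantities would come from the critic and policy networks (e.g. in a framework such as d3rlpy); the sketch only makes the trade-off structure of the key idea concrete.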
