Yunzhe Tian, Yike Li, Yingxiao Xiang, Wenjia Niu, Endong Tong, and Jiqiang Liu (Beijing Jiaotong University)

Robust reinforcement learning has been a challenging problem due to the often-unknown differences between the real and the training environment. Existing efforts approach the problem by applying random environmental perturbations during the learning process. However, one cannot guarantee that a perturbation is beneficial: harmful perturbations may cause reinforcement learning to fail. Therefore, in this paper, we propose to use a GAN to dynamically generate progressive perturbations at each epoch, realizing curricular policy learning. A demo we implemented on the unmanned CarRacing game validates the effectiveness of the approach.
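The abstract does not give implementation details, but the core idea — a generator producing environment perturbations whose strength grows with the training epoch, yielding a curriculum — can be sketched schematically. The toy generator below (a fixed random linear map with a `tanh` squashing nonlinearity), the dimensions, and the linear curriculum schedule `epoch / num_epochs` are all illustrative assumptions, not the authors' method:

```python
import numpy as np

rng = np.random.default_rng(0)

obs_dim, z_dim, num_epochs = 4, 3, 5
W = rng.normal(size=(obs_dim, z_dim))  # toy generator weights (untrained, for illustration)

def generator(z, W):
    # Toy "GAN generator": linear map + tanh, so each perturbation
    # component is bounded in [-1, 1].
    return np.tanh(W @ z)

def perturbed_obs(obs, epoch):
    """Apply a curriculum-scaled generated perturbation to an observation.

    The scale grows linearly with the epoch, so early policies face weak
    perturbations and later policies face progressively stronger ones.
    """
    z = rng.normal(size=z_dim)          # latent noise fed to the generator
    scale = epoch / num_epochs          # curriculum: 1/num_epochs -> 1.0
    return obs + scale * generator(z, W)

# Perturbation magnitude grows with the epoch under this schedule.
obs = np.zeros(obs_dim)
deltas = [np.abs(perturbed_obs(obs, e) - obs).max()
          for e in range(1, num_epochs + 1)]
```

In the paper's setting the generator would instead be trained adversarially against the current policy, but the curriculum structure — bounded perturbations whose allowed magnitude increases per epoch — is the part this sketch illustrates.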
