Yunzhe Tian, Yike Li, Yingxiao Xiang, Wenjia Niu, Endong Tong, and Jiqiang Liu (Beijing Jiaotong University)

Robust reinforcement learning remains challenging because the differences between the training environment and the real environment are unknown in advance. Existing efforts approach the problem by applying random environmental perturbations during the learning process. However, there is no guarantee that a perturbation is beneficial; harmful perturbations may cause reinforcement learning to fail. In this paper, we therefore propose to use a GAN to dynamically generate progressive perturbations at each epoch, realizing curricular policy learning. A demo implemented in the unmanned CarRacing game validates the effectiveness of the approach.
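The curricular idea above can be illustrated with a minimal sketch. This is not the authors' implementation: the GAN generator is replaced here by a simple adaptive schedule that proposes progressively stronger perturbations and rejects "bad" ones that degrade returns too sharply, and the RL training epoch is replaced by a toy fixed policy under observation noise. All names (`train_policy`, `curriculum`, `drop_tol`) are hypothetical.

```python
import random

def train_policy(perturb_scale, episodes=50, seed=0):
    """Toy stand-in for one training epoch: average 'reward' of a
    fixed policy under Gaussian observation noise of a given scale."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(episodes):
        obs = 1.0 + rng.gauss(0.0, perturb_scale)  # perturbed observation
        action = max(min(obs, 2.0), 0.0)           # clipped policy response
        total += 1.0 - abs(action - 1.0)           # reward peaks at action == 1
    return total / episodes

def curriculum(epochs=10, step=0.05, drop_tol=0.2):
    """Progressively increase perturbation strength, but reject a
    candidate (a 'bad' perturbation) if reward drops by more than
    drop_tol relative to the current baseline."""
    scale = 0.0
    baseline = train_policy(scale)
    history = [(scale, baseline)]
    for _ in range(epochs):
        candidate = scale + step
        reward = train_policy(candidate)
        if baseline - reward <= drop_tol:  # accept only mild degradation
            scale, baseline = candidate, reward
        history.append((scale, baseline))
    return history

hist = curriculum()
```

In the paper's setting, the acceptance test would be replaced by a GAN discriminator and the fixed schedule by a generator trained to propose perturbations at the frontier of the policy's competence; the sketch only conveys the progressive, failure-avoiding structure of the curriculum.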
