Problems and solutions in offline learning

Current reinforcement learning algorithms require vast amounts of data to solve new tasks. This "hunger for data" is especially onerous in high-stakes scenarios like healthcare or education, where on-policy data collection is infeasible.

This motivates the Batch RL setting, in which the learner is given only a fixed batch of transition tuples and allowed no further interaction with the environment. Since this data was not necessarily collected by an expert, imitation learning is not an option. And while Q-learning and derived methods like DQN or SAC are in theory capable of off-policy learning, in practice they fail dramatically in this setting.
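The core failure mode is extrapolation error: the bootstrapped max ranges over all actions, including ones the batch contains no evidence for, so arbitrary initial values for those actions leak into the learned estimates. A minimal tabular sketch (a hypothetical toy MDP, not taken from the papers):

```python
import numpy as np

# Hypothetical two-state, two-action MDP. The batch only ever contains
# action 0, but standard Q-learning bootstraps with a max over BOTH
# actions, so the never-updated, optimistically initialized values of
# action 1 inflate the learned estimates for action 0.
n_states, n_actions, gamma = 2, 2, 0.9
rng = np.random.default_rng(0)
Q = rng.normal(loc=5.0, size=(n_states, n_actions))  # optimistic init
Q_init = Q.copy()

# Fixed batch of (s, a, r, s') tuples: action 1 has no support.
batch = [(0, 0, 1.0, 1), (1, 0, 0.0, 0)]

for _ in range(200):
    for s, a, r, s2 in batch:
        # Target maxes over all actions, including unseen action 1.
        Q[s, a] += 0.5 * (r + gamma * Q[s2].max() - Q[s, a])

# Q[:, 1] is pure extrapolation: never corrected by data, yet it
# propagated into Q[:, 0] through the bootstrap.
print(Q)
```

Following action 0 forever yields a true discounted value of 1/(1 - 0.81) ≈ 5.26 from state 0, but the learned Q[0, 0] settles above that because the target repeatedly bootstraps off the fictitious value of action 1.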

In this talk we will examine the cause of this surprising failure of powerful off-policy algorithms, as well as proposals to fix it. Recent Batch RL papers also provide a tale of disagreement about evaluation scenarios.
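One family of proposed fixes constrains the bootstrap to actions supported by the batch. A tabular sketch of this batch-constrained idea (a strong simplification of Fujimoto et al.'s method, on the same kind of hypothetical toy data):

```python
import numpy as np

# Tabular analogue of batch-constrained Q-learning: the bootstrapped max
# is taken only over actions actually observed in the batch for the next
# state, so out-of-distribution action values never enter the target.
n_states, n_actions, gamma = 2, 2, 0.9
rng = np.random.default_rng(0)
Q = rng.normal(loc=5.0, size=(n_states, n_actions))  # optimistic init

batch = [(0, 0, 1.0, 1), (1, 0, 0.0, 0)]  # only action 0 ever taken

# Per-state set of actions with support in the batch.
seen = {s: set() for s in range(n_states)}
for s, a, _, _ in batch:
    seen[s].add(a)

for _ in range(200):
    for s, a, r, s2 in batch:
        # Constrained target: max only over batch-supported actions.
        target = r + gamma * max(Q[s2, b] for b in seen[s2])
        Q[s, a] += 0.5 * (target - Q[s, a])

print(Q[:, 0])
```

With the constraint, Q[0, 0] converges to 1/(1 - 0.81) ≈ 5.26, the true value of the policy the batch supports; the optimistic values of the unseen action are simply ignored. The deep variants discussed in the papers below replace the explicit support set with a learned generative model or a divergence penalty against the behavior policy.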


  • Off-Policy Deep Reinforcement Learning without Exploration, Scott Fujimoto, David Meger, Doina Precup. arXiv:1812.02900 [cs, stat] (2019)
  • Stabilizing Off-Policy Q-Learning via Bootstrapping Error Reduction, Aviral Kumar, Justin Fu, George Tucker, Sergey Levine. arXiv:1906.00949 [cs, stat] (2019)
  • Behavior Regularized Offline Reinforcement Learning, Yifan Wu, George Tucker, Ofir Nachum. arXiv:1911.11361 [cs, stat] (2019)
