Hindsight States: Blending Sim & Real Task Elements for Efficient Reinforcement Learning


Simon Guist
Max Planck Institute for Intelligent Systems
Jan Schneider
Max Planck Institute for Intelligent Systems
Vincent Berenz
Max Planck Institute for Intelligent Systems
Alexander Dittrich
Max Planck Institute for Intelligent Systems
Bernhard Schölkopf
Max Planck Institute for Intelligent Systems
Dieter Büchler
Max Planck Institute for Intelligent Systems

Paper ID 38

Session 5. Simulation and Sim2Real

Poster Session Wednesday, July 12

Poster 6

Abstract: Reinforcement learning has shown great potential for solving complex tasks when large amounts of data can be generated with little effort. In robotics, one approach to generating training data builds on simulations or models. However, for many tasks, such as those involving complex soft robots, devising such models is substantially more challenging. Recent successes in soft robotics indicate that employing complex robots can lead to performance boosts. Here, we leverage the imbalance in complexity between the robot's dynamics and those of the other task elements to learn more sample-efficiently. We (i) abstract the task into distinct components, (ii) off-load the simple dynamics parts into the simulation, and (iii) multiply these virtual parts to generate more data in hindsight. Our new method, Hindsight States (HiS), uses this data and selects the most useful transitions for training. It can be used with an arbitrary off-policy algorithm. We validate our method on several challenging simulated tasks and demonstrate that it improves learning both on its own and when combined with an existing hindsight algorithm, Hindsight Experience Replay (HER). Finally, we evaluate HiS on a physical system and show that it boosts performance on a complex table tennis task with a muscular robot.
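The three steps in the abstract can be illustrated with a minimal sketch of hindsight data multiplication. This is not the authors' implementation: all names (`augment_with_virtual_elements`, `simulate_virtual`, `keep`, the `robot`/`virtual` fields) are hypothetical. The idea shown is only that the expensive real robot part of each logged transition stays fixed, while the cheap simulated (virtual) task element is re-rolled several times to multiply the data, with an optional filter standing in for HiS's selection of the most useful transitions.

```python
import random

def augment_with_virtual_elements(transitions, simulate_virtual,
                                  n_virtual=4, keep=None):
    """Illustrative hindsight augmentation (hypothetical API, not HiS itself):
    reuse each real robot trajectory unchanged and pair it with n_virtual
    freshly simulated virtual-element outcomes; optionally filter."""
    augmented = []
    for t in transitions:
        for _ in range(n_virtual):
            # Re-simulate only the simple virtual dynamics (e.g., a ball),
            # keeping the expensive real robot part of the transition fixed.
            new_virtual = simulate_virtual(t)
            augmented.append({**t, "virtual": new_virtual})
    if keep is not None:
        # Stand-in for selecting the most useful transitions for training.
        augmented = [a for a in augmented if keep(a)]
    return augmented

# Toy usage: the "virtual element" is just a scalar ball position here.
random.seed(0)
real_transitions = [{"robot": i, "virtual": 0.0} for i in range(3)]
out = augment_with_virtual_elements(
    real_transitions,
    simulate_virtual=lambda t: random.uniform(-1.0, 1.0),
    n_virtual=4,
)
print(len(out))  # 3 real transitions x 4 virtual roll-outs = 12
```

Because only the simple virtual dynamics are re-simulated, each costly real-robot interaction yields many replay-buffer transitions, which is where the sample-efficiency gain described in the abstract comes from.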