Learning and Adapting Agile Locomotion Skills by Transferring Experience


Laura M Smith
University of California, Berkeley
J. Chase Kew
Google Brain
Tianyu Li
Meta
Linda Luu
Google Inc
Xue Bin Peng
University of California, Berkeley"
Sehoon Ha
Georgia Tech
Jie Tan
Google Inc
Sergey Levine
University of California, Berkeley
Paper ID 51

Session 7. Mobile Manipulation and Locomotion

Poster Session Wednesday, July 12

Poster 19

Abstract: Legged robots have enormous potential in their range of capabilities, from navigating unstructured terrains to high-speed running. However, these capabilities bring with them difficult control problems, and designing controllers for highly agile dynamic motions remains a substantial challenge for roboticists. Reinforcement learning (RL) offers a promising data-driven approach for automatically training such controllers. However, exploration in these high-dimensional, underactuated systems remains a significant hurdle for enabling legged robots to learn performant, naturalistic, and versatile agility skills. We propose a framework for training complex robotic skills by transferring experience from existing controllers to jumpstart learning new tasks. To leverage controllers we can acquire in practice, we design this framework to be flexible in terms of their source—that is, the controllers may have been optimized for a different objective under different dynamics, or may require different knowledge of the surroundings—and thus may be highly suboptimal for the target task. We show that our method enables learning complex agile jumping behaviors, navigating to goal locations while walking on hind legs, and adapting to new environments. We also demonstrate that the agile behaviors learned in this way are graceful and safe enough to deploy in the real world.
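To make the high-level idea in the abstract concrete, below is a minimal sketch (not the authors' implementation) of experience transfer for off-policy RL: rollouts from an existing, possibly suboptimal source controller are relabeled with the target-task reward and used to seed the replay buffer before training a new policy. All names here (ReplayBuffer, collect_transitions, jumpstart_training, rl_update) are hypothetical placeholders introduced for illustration.

```python
import random
from collections import deque


class ReplayBuffer:
    """Fixed-capacity buffer of (state, action, reward, next_state, done) tuples."""

    def __init__(self, capacity=100_000):
        self.storage = deque(maxlen=capacity)

    def add(self, transition):
        self.storage.append(transition)

    def sample(self, batch_size):
        return random.sample(self.storage, min(batch_size, len(self.storage)))


def collect_transitions(env, policy, num_steps):
    """Roll out `policy` in `env`; rewards come from the *target* task definition."""
    transitions = []
    state = env.reset()
    for _ in range(num_steps):
        action = policy(state)
        next_state, reward, done, _ = env.step(action)
        transitions.append((state, action, reward, next_state, done))
        state = env.reset() if done else next_state
    return transitions


def jumpstart_training(env, source_controller, new_policy, rl_update, steps=10_000):
    """Seed the buffer with source-controller experience, then run off-policy RL."""
    buffer = ReplayBuffer()
    # 1) Transfer: experience from an existing controller, even one tuned for a
    #    different objective or different dynamics, bootstraps exploration.
    for t in collect_transitions(env, source_controller, num_steps=5_000):
        buffer.add(t)
    # 2) Standard off-policy training on the target task, starting from that data.
    state = env.reset()
    for _ in range(steps):
        action = new_policy(state)
        next_state, reward, done, _ = env.step(action)
        buffer.add((state, action, reward, next_state, done))
        rl_update(new_policy, buffer.sample(256))
        state = env.reset() if done else next_state
    return new_policy
```

This is only one plausible instantiation of "transferring experience to jumpstart learning"; the paper's actual framework, objectives, and training details are described on the project page and in the full text.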