Learning Getting-Up Policies for Real-World Humanoid Robots


Xialin He, Runpei Dong, Zixuan Chen, Saurabh Gupta

Paper ID 63

Session 7. Humanoids

Poster Session (Day 2): Sunday, June 22, 6:30-8:00 PM

Abstract: Automatic recovery from falls is a crucial prerequisite before humanoid robots can be reliably deployed. Hand-designing controllers for getting up is difficult because of the varied configurations a humanoid can end up in after a fall and the challenging terrains humanoid robots are expected to operate on. This paper develops a learning framework to produce controllers that enable humanoid robots to get up from varying configurations on varying terrains. Unlike previous successful applications of learning to humanoid locomotion, the getting-up task involves complex contact patterns, the need to accurately model collision geometry, and sparser rewards. We circumvent these challenges through a two-stage approach that follows a curriculum. The first stage focuses on discovering a good get-up trajectory under minimal constraints on smoothness or speed/torque limits. The second stage then refines the discovered motions into deployable (i.e., smooth and slow) motions that are robust to variations in initial configuration and terrain. We find these innovations enable a real-world G1 humanoid robot to get up from the two main situations we considered: a) lying face up and b) lying face down, both tested on flat, deformable, and slippery surfaces as well as slopes (e.g., slippery grass and snowfield). To the best of our knowledge, this is the first successful demonstration of learned getting-up policies for humanoid robots in the real world.
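
To illustrate the two-stage curriculum described in the abstract, here is a minimal, hypothetical sketch in Python. The names (`StageConfig`, `reward`, the specific weights) are illustrative assumptions, not the authors' implementation: stage 1 relaxes regularization so the sparse get-up reward is easier to discover, and stage 2 adds smoothness/torque penalties, slows the motion down, and randomizes terrain to make the policy deployable.

```python
# Hypothetical sketch of a two-stage curriculum for a get-up policy.
# All names and weight values are illustrative assumptions.

from dataclasses import dataclass


@dataclass
class StageConfig:
    smoothness_weight: float    # penalty on action rate (jerky motion)
    torque_weight: float        # penalty on exceeding torque limits
    speed_scale: float          # target playback speed of the motion
    randomize_terrain: bool     # vary terrain and initial pose for robustness


# Stage 1: discover *some* successful get-up trajectory.
# Smoothness and speed/torque constraints are kept minimal so the
# sparse task reward (e.g., reaching a standing pose) dominates.
stage1 = StageConfig(smoothness_weight=0.0, torque_weight=0.0,
                     speed_scale=1.0, randomize_terrain=False)

# Stage 2: refine the discovered motion into a deployable one:
# slow it down, penalize jerky and high-torque actions, and randomize
# initial configurations and terrains.
stage2 = StageConfig(smoothness_weight=1.0, torque_weight=1.0,
                     speed_scale=0.5, randomize_terrain=True)


def reward(task_success: float, action_rate: float, torque_excess: float,
           cfg: StageConfig) -> float:
    """Combine the sparse task reward with stage-dependent regularizers."""
    return (task_success
            - cfg.smoothness_weight * action_rate
            - cfg.torque_weight * torque_excess)


if __name__ == "__main__":
    # Toy illustration: the same rollout statistics score differently per stage,
    # pushing stage-2 training toward smoother, lower-torque behavior.
    print("stage 1 reward:", reward(1.0, 0.8, 0.3, stage1))   # 1.0
    print("stage 2 reward:", reward(1.0, 0.8, 0.3, stage2))   # -0.1
```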