Manipulator-Independent Representations for Visual Imitation


Yuxiang Zhou (DeepMind),
Yusuf Aytar (DeepMind),
Konstantinos Bousmalis (DeepMind)

Abstract

Imitation learning is an effective tool for robotic learning tasks where specifying a reinforcement learning (RL) reward is not feasible or where the exploration problem is particularly difficult. Imitation, typically via behavior cloning or inverse RL, derives a policy from a collection of first-person action-state trajectories. This is contrary to how humans and other animals imitate: we observe a behavior, even from other species, understand its perceived effect on the state of the environment, and figure out what actions our body can perform to reach a similar outcome. In this work, we explore the possibility of third-person visual imitation of manipulation trajectories, only from vision and without access to actions, demonstrated by embodiments different from that of our imitating agent. Specifically, we investigate what an appropriate representation would be with which an RL agent can visually track trajectories of complex manipulation behavior (non-planar, with multiple-object interactions) demonstrated by experts with different embodiments. We present a way to train manipulator-independent representations (MIR) that primarily focus on the change in the environment and have all the characteristics that make them suitable for cross-embodiment visual imitation with RL: they are domain-invariant, temporally smooth, and actionable. We show that with our proposed method our agents are able to imitate, with complex robot control, trajectories from a variety of embodiments and across significant visual and dynamics differences, e.g. the simulation-to-reality gap.
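
To make the idea of "visually tracking" a cross-embodiment demonstration concrete, the sketch below shows one plausible way such a representation could be used as a dense RL reward: the demonstration video is pre-encoded with a manipulator-independent encoder, and at each step the agent is rewarded for keeping its own encoded observation close to the corresponding point on the demonstration trajectory. This is an illustrative assumption, not the paper's exact formulation; the `encoder` function, the Gaussian reward shape, and all names are hypothetical.

```python
import numpy as np

def tracking_reward(agent_frame, demo_embeddings, step, encoder, sigma=1.0):
    """Reward for visually tracking a demonstration in embedding space.

    Minimal sketch: `encoder` is assumed to map an image to a fixed-size
    vector in a manipulator-independent representation space, and the
    demonstration video is pre-encoded once into `demo_embeddings`
    (shape: [T, d]). The reward is higher the closer the agent's current
    embedding is to the demonstration embedding at the matching timestep.
    """
    z_agent = encoder(agent_frame)                       # embed current observation
    z_target = demo_embeddings[min(step, len(demo_embeddings) - 1)]
    dist = np.linalg.norm(z_agent - z_target)            # distance in representation space
    return float(np.exp(-dist**2 / (2.0 * sigma**2)))    # smooth, bounded reward in (0, 1]

# Hypothetical usage, given an encoder and the frames of a demonstration video:
# demo_embeddings = np.stack([encoder(f) for f in demo_video_frames])
# r_t = tracking_reward(camera_frame, demo_embeddings, t, encoder)
```

A reward of this form only works if the representation has the properties the abstract lists: it must be domain-invariant (so demonstrator and imitator embeddings are comparable), temporally smooth (so the reward changes gradually along the trajectory), and actionable (so decreasing the embedding distance corresponds to progress the agent can actually make).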

