Discovering Generalizable Skills via Automated Generation of Diverse Tasks


Kuan Fang (Stanford University),
Yuke Zhu (University of Texas at Austin),
Silvio Savarese (Stanford University),
Li Fei-Fei (Stanford University)



Abstract

The learning efficiency of an intelligent agent can be greatly improved by utilizing a useful set of skills. However, the design of robot skills is often intractable in real-world applications due to the prohibitive amount of effort and expertise it requires. In this work, we introduce Skill Learning In Diversified Environments (SLIDE), a method that discovers generalizable skills through the automated generation of a diverse set of tasks. In contrast to prior work on unsupervised skill discovery, which incentivizes skills to produce different outcomes in the same environment, our method pairs each skill with a unique task produced by a trainable task generator. To encourage generalizable skills to emerge, our method trains each skill to specialize in its paired task while maximizing the diversity of the generated tasks. A task discriminator, defined on the robot behaviors in the generated tasks, is jointly trained to estimate the evidence lower bound of the diversity objective. The learned skills can then be composed by a hierarchical reinforcement learning algorithm to solve unseen target tasks. We demonstrate that the proposed method effectively learns a variety of robot skills in two tabletop manipulation domains. Our results suggest that the learned skills improve the robot’s performance on a range of unseen target tasks compared to existing reinforcement learning and skill learning methods.
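
For context on the diversity objective mentioned in the abstract, the sketch below shows the generic variational lower bound used in DIAYN-style skill discovery. It is an illustration by analogy rather than the paper's exact formulation; the symbols z (skill index, here thought of as paired with a generated task), tau (the robot's behavior in that task), pi_theta (the skill policies), and q_phi (the task discriminator) are notation introduced only for this sketch.

```latex
% Sketch: a generic variational lower bound on a mutual-information
% diversity objective, in the style of DIAYN-like skill discovery.
% This is an illustration by analogy; the exact SLIDE objective may differ.
\documentclass{article}
\usepackage{amsmath, amssymb}
\begin{document}
\begin{align}
  I(\tau; z) &= \mathcal{H}(z) - \mathcal{H}(z \mid \tau) \\
             &\geq \mathcal{H}(z)
               + \mathbb{E}_{z \sim p(z),\; \tau \sim \pi_\theta(\cdot \mid z)}
                 \bigl[\log q_\phi(z \mid \tau)\bigr],
\end{align}
where $z$ indexes a skill and its paired generated task, $\tau$ denotes the
robot's behavior in that task, $\pi_\theta$ are the skill policies, and
$q_\phi$ is the task discriminator that predicts $z$ from observed behavior.
\end{document}
```

Maximizing such a bound jointly over the skill policies and the discriminator rewards behaviors from which the skill/task index can be recovered, which is one plausible reading of how a task discriminator can be "jointly trained to estimate the evidence lower bound of the diversity objective."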

