GenAug: Retargeting behaviors to unseen situations via Generative Augmentation


Qiuyu Chen
University of Washington
Shosuke C Kiami
University of Washington
Abhishek Gupta
University of Washington
Vikash Kumar
University of Washington

Paper ID 10

Nominated for Best System Paper

Session 2. Manipulation from Demonstrations and Teleoperation

Poster Session Tuesday, July 11

Poster 10

Abstract: Robot learning methods have the potential for widespread generalization across tasks, environments, and objects. However, these methods are severely limited by the amount of data they are provided or are able to collect. Robots in the real world are likely to collect only small datasets, both in terms of quantity and diversity. For robot learning to generalize, we must be able to leverage sources of data or priors beyond the robot’s own experience. In this work, we posit that image-text generative models, which are pre-trained on large corpora of web-scraped data, can serve as such a source of data. We show that despite these generative models being trained on largely non-robotics data, they can be effective ways to impart priors into the process of robot learning in the real world in a way that enables widespread generalization. In particular, we show how pre-trained generative models for inpainting can serve as effective tools for semantically meaningful data augmentation. By leveraging these pre-trained models to generate appropriate “functional” data augmentations, we propose GenAug, a system that significantly improves policy generalization. We apply GenAug to tabletop manipulation tasks, showing the ability to retarget behavior to novel scenarios while requiring only marginal amounts of real-world data. We demonstrate the efficacy of this system on a number of object manipulation problems in the real world, showing a 40% improvement in generalization to novel scenes and objects.
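
As a rough illustration of the idea of inpainting-based semantic augmentation described in the abstract, the sketch below uses the Hugging Face `diffusers` Stable Diffusion inpainting pipeline to repaint task-irrelevant regions of a demonstration image. This is a minimal sketch under stated assumptions, not the authors' actual GenAug pipeline: the function `augment_scene`, the mask source, and the prompts are illustrative placeholders.

```python
# Minimal sketch (assumption, not the paper's implementation) of using an
# off-the-shelf inpainting model for semantic data augmentation of robot data.
import torch
from PIL import Image
from diffusers import StableDiffusionInpaintPipeline

# Pre-trained inpainting model from the Hugging Face hub.
pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting",
    torch_dtype=torch.float16,
).to("cuda")

def augment_scene(rgb: Image.Image, background_mask: Image.Image, prompt: str) -> Image.Image:
    """Repaint masked (task-irrelevant) regions of a demonstration frame.

    Keeping the manipulated object and the robot outside `background_mask`
    preserves the scene geometry that the recorded actions depend on, so the
    same actions remain valid for the augmented image.
    """
    return pipe(prompt=prompt, image=rgb, mask_image=background_mask).images[0]

# Example: produce visually diverse copies of one real demonstration frame.
frame = Image.open("demo_frame.png").convert("RGB").resize((512, 512))
mask = Image.open("background_mask.png").convert("RGB").resize((512, 512))  # white = repaint
for i, prompt in enumerate(["a cluttered kitchen counter", "a wooden office desk"]):
    augment_scene(frame, mask, prompt).save(f"augmented_{i}.png")
```

Because only the background is regenerated while the task-relevant content is preserved, each augmented frame can be paired with the original action labels, multiplying the effective diversity of a small real-world dataset.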