Robotic Skill Acquisition via Instruction Augmentation with Vision-Language Models


Ted Xiao (Google Inc), Harris Chan (University of Toronto), Pierre Sermanet (Google Inc), Ayzaan Wahid (Google Inc), Anthony Brohan (Google Research), Karol Hausman (Google Brain), Sergey Levine (Google Inc), Jonathan Tompson (Google Inc)
Paper ID 29

Session 4. Large Data and Vision-Language Models for Robotics

Poster Session Tuesday, July 11

Poster 29

Abstract: Robotic manipulation policies that follow natural language instructions are typically trained from corpora of robot-language data that were either collected with specific tasks in mind or expensively relabeled by humans with varied language descriptions in hindsight. Recently, large-scale pretrained vision-language models (VLMs) like CLIP or ViLD have been applied to robotics for learning representations and scene descriptors. Can these pretrained models serve as automatic labelers for robot data, effectively importing Internet-scale knowledge into existing datasets to make them useful even for tasks that are not reflected in their ground-truth annotations? For example, if the original annotations contained simple task descriptions such as “pick up the apple”, a pretrained VLM-based labeler could significantly expand the number of semantic concepts available in the data and introduce spatial concepts such as “the apple on the right side of the table” or alternative phrasings such as “the red colored fruit”. To accomplish this, we introduce Data-driven Instruction Augmentation for Language-conditioned control (DIAL): we use semi-supervised language labels, leveraging the semantic understanding of CLIP to propagate knowledge onto large datasets of unlabeled demonstration data, and then train language-conditioned policies on the augmented datasets. This method enables cheaper acquisition of useful language descriptions compared to expensive human labels, allowing for more efficient label coverage of large-scale datasets. We apply DIAL to a challenging real-world robotic manipulation domain where 96.5% of the 80,000 demonstrations do not contain crowd-sourced language annotations. Through a large-scale study of over 1,300 real-world evaluations, we find that DIAL enables imitation learning policies to acquire new capabilities and generalize to 60 novel instructions unseen in the original dataset.
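
To make the relabeling idea concrete, the sketch below shows one way a CLIP-style scorer could propose hindsight instructions for an unlabeled episode: score a pool of candidate phrasings against an episode frame and keep the highest-scoring ones as additional labels. This is a minimal illustration of the concept described in the abstract, not the authors' exact pipeline; the model checkpoint, the `relabel_episode` helper, the use of the final frame, and the top-k selection are all illustrative assumptions.

```python
# Minimal sketch of VLM-based instruction augmentation (illustrative, not DIAL's exact pipeline).
# Requires: torch, transformers, pillow.
import torch
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")


def relabel_episode(frames, candidate_instructions, top_k=3):
    """Score candidate instructions against an episode and return the top-k
    as additional hindsight labels. `frames` is a list of PIL images."""
    # Use the final frame as a proxy for the outcome the episode achieved.
    inputs = processor(
        text=candidate_instructions,
        images=frames[-1],
        return_tensors="pt",
        padding=True,
    )
    with torch.no_grad():
        outputs = model(**inputs)
    # logits_per_image has shape (1, num_instructions): image-text similarity scores.
    scores = outputs.logits_per_image.squeeze(0)
    best = scores.topk(top_k).indices.tolist()
    return [candidate_instructions[i] for i in best]


# The resulting (episode, instruction) pairs would then be added to the
# dataset used to train a language-conditioned imitation policy.
```

In this sketch the candidate pool would come from alternative phrasings of existing annotations or templated descriptions; the paper's contribution is showing that such VLM-scored labels, applied at scale to mostly unannotated demonstrations, let the downstream policy follow instructions never seen in the original annotations.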