Language-Driven Representation Learning for Robotics


Siddharth Karamcheti
Stanford University
Suraj Nair
Stanford University
Annie S. Chen
Stanford University
Thomas Kollar
Toyota Research Institute
Chelsea Finn
Stanford University
Dorsa Sadigh
Stanford University
Percy Liang
Stanford University

Paper ID 32

Nominated for Best Paper

Session 4. Large Data and Vision-Language Models for Robotics

Poster Session Tuesday, July 11

Poster 32

Abstract: Recent work in visual representation learning for robotics demonstrates the viability of learning from large video datasets of humans performing everyday tasks. Leveraging methods such as masked autoencoding and contrastive learning, these representations exhibit strong transfer to policy learning for visuomotor control. But robot learning encompasses a diverse set of problems beyond control, including grasp affordance prediction, language-conditioned imitation learning, and intent scoring for human-robot collaboration, amongst others. First, we demonstrate that existing representations yield inconsistent results across these tasks: masked autoencoding approaches pick up on low-level spatial features at the cost of high-level semantics, while contrastive learning approaches capture the opposite. We then introduce Voltron, a framework for language-driven representation learning from human videos and associated captions. Voltron trades off language-conditioned visual reconstruction, which learns low-level visual patterns, against visually-grounded language generation, which encodes high-level semantics. We also construct a new evaluation suite spanning five distinct robot learning problems, providing a unified platform for holistically evaluating visual representations for robotics. Through comprehensive, controlled experiments across all five problems, we find that Voltron's language-driven representations outperform the prior state of the art, especially on targeted problems requiring higher-level features.
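To make the trade-off described in the abstract concrete, here is a minimal PyTorch sketch (not the authors' released code) of a dual objective that balances language-conditioned visual reconstruction against visually-grounded language generation. The module names (`visual_encoder`, `pixel_decoder`, `language_decoder`) and the scalar weight `alpha` are illustrative assumptions, not Voltron's actual implementation.

```python
import torch.nn as nn
import torch.nn.functional as F


class DualObjectiveSketch(nn.Module):
    """Hypothetical sketch of a Voltron-style combined training objective."""

    def __init__(self, visual_encoder, pixel_decoder, language_decoder, alpha=0.5):
        super().__init__()
        self.visual_encoder = visual_encoder      # e.g., a ViT over visible patches + caption embedding
        self.pixel_decoder = pixel_decoder        # reconstructs the masked image patches
        self.language_decoder = language_decoder  # autoregressive head that generates the caption
        self.alpha = alpha                        # assumed tradeoff weight between the two losses

    def forward(self, visible_patches, target_patches, caption_embeds, caption_ids):
        # Language-conditioned visual reconstruction: encode visible patches
        # together with the caption embedding, then predict masked pixels
        # (encourages low-level spatial features).
        latents = self.visual_encoder(visible_patches, caption_embeds)
        recon = self.pixel_decoder(latents)
        recon_loss = F.mse_loss(recon, target_patches)

        # Visually-grounded language generation: predict caption tokens from
        # the visual latents with teacher forcing (encourages high-level semantics).
        logits = self.language_decoder(latents, caption_ids[:, :-1])
        gen_loss = F.cross_entropy(
            logits.reshape(-1, logits.size(-1)),
            caption_ids[:, 1:].reshape(-1),
        )

        # Trade off low-level reconstruction against high-level generation.
        return self.alpha * recon_loss + (1.0 - self.alpha) * gen_loss
```

The single weight `alpha` is one simple way to realize the trade-off the abstract describes; the paper and code release should be consulted for how Voltron actually balances the two objectives.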