StructDiffusion: Language-Guided Creation of Physically-Valid Structures using Unseen Objects


Weiyu Liu (Georgia Tech)
Yilun Du (Massachusetts Institute of Technology)
Tucker Hermans (University of Utah)
Sonia Chernova (Georgia Tech)
Chris Paxton (Meta AI)
Paper ID 31

Session 4. Large Data and Vision-Language Models for Robotics

Poster Session Tuesday, July 11

Poster 31

Abstract: Robots operating in human environments must be able to rearrange objects into semantically meaningful configurations, even if these objects are previously unseen. In this work, we focus on the problem of building physically valid structures without step-by-step instructions. We propose StructDiffusion, which combines a diffusion model and an object-centric transformer to construct structures given partial-view point clouds and high-level language goals, such as “set the table”. Our method can perform multiple challenging language-conditioned multi-step 3D planning tasks using one model. StructDiffusion improves the success rate of assembling physically valid structures out of unseen objects by 16% on average over an existing multi-modal transformer model trained on specific structures. We evaluate on held-out objects, both in simulation and on real-world rearrangement tasks. Importantly, we show how integrating both a diffusion model and a collision-discriminator model allows for improved generalization over other methods when rearranging previously unseen objects.
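To make the sample-then-filter idea in the abstract concrete, below is a minimal illustrative sketch: a conditional diffusion model proposes candidate goal poses for the objects, and a learned collision discriminator re-ranks the samples to keep the most physically valid arrangement. Everything here (the Denoiser and CollisionDiscriminator stubs, shapes, and hyperparameters) is a hypothetical stand-in rather than the paper's implementation; in StructDiffusion the denoiser is an object-centric transformer conditioned on partial-view point clouds and a language goal.

import torch
import torch.nn as nn

T = 50          # number of diffusion steps (assumed)
POSE_DIM = 6    # per-object pose: 3D translation + 3D rotation (assumed)
N_OBJECTS = 4   # objects being rearranged (assumed)
COND_DIM = 128  # stand-in for a point-cloud + language embedding

class Denoiser(nn.Module):
    """Stand-in for the object-centric transformer: predicts noise
    given noisy poses, the diffusion timestep, and scene conditioning."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(N_OBJECTS * POSE_DIM + 1 + COND_DIM, 256),
            nn.ReLU(),
            nn.Linear(256, N_OBJECTS * POSE_DIM),
        )

    def forward(self, x, t, cond):
        t_feat = t.float().view(-1, 1) / T
        return self.net(torch.cat([x, t_feat, cond], dim=-1))

class CollisionDiscriminator(nn.Module):
    """Stand-in scorer: higher output means a more physically valid
    (e.g., collision-free) candidate arrangement."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(N_OBJECTS * POSE_DIM + COND_DIM, 256),
            nn.ReLU(),
            nn.Linear(256, 1),
        )

    def forward(self, x, cond):
        return self.net(torch.cat([x, cond], dim=-1)).squeeze(-1)

@torch.no_grad()
def sample_and_filter(denoiser, discriminator, cond, n_candidates=16):
    """DDPM-style ancestral sampling followed by discriminator re-ranking."""
    betas = torch.linspace(1e-4, 0.02, T)
    alphas = 1.0 - betas
    alpha_bars = torch.cumprod(alphas, dim=0)

    cond = cond.expand(n_candidates, -1)
    x = torch.randn(n_candidates, N_OBJECTS * POSE_DIM)
    for t in reversed(range(T)):
        t_batch = torch.full((n_candidates,), t)
        eps = denoiser(x, t_batch, cond)
        # Posterior mean of x_{t-1} given the predicted noise.
        mean = (x - betas[t] / torch.sqrt(1 - alpha_bars[t]) * eps) \
               / torch.sqrt(alphas[t])
        x = mean + (betas[t].sqrt() * torch.randn_like(x) if t > 0 else 0.0)

    # Keep the candidate the discriminator judges most physically valid.
    scores = discriminator(x, cond)
    return x[scores.argmax()]

if __name__ == "__main__":
    cond = torch.randn(1, COND_DIM)  # placeholder scene/language embedding
    goal = sample_and_filter(Denoiser(), CollisionDiscriminator(), cond)
    print(goal.view(N_OBJECTS, POSE_DIM))

In this division of labor, the diffusion model supplies diverse candidate structures while the discriminator filters out arrangements with predicted collisions; this combination is what the abstract credits for improved generalization to unseen objects.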