VisuoSpatial Foresight for Multi-Step, Multi-Task Fabric Manipulation


Ryan Hoque, Daniel Seita, Ashwin Balakrishna, Adi Ganapathi, Ajay Tanwani, Nawid Jamali, Katsu Yamane, Soshi Iba, Ken Goldberg

Abstract

Robotic fabric manipulation has applications in home robotics, textiles, senior care, and surgery. Existing fabric manipulation techniques, however, are designed for specific tasks, making it difficult to generalize across different but related tasks. We extend the Visual Foresight framework to learn fabric dynamics that can be efficiently reused to accomplish different fabric manipulation tasks with a single goal-conditioned policy. We introduce VisuoSpatial Foresight (VSF), which builds on prior work by learning visual dynamics on domain-randomized RGB images and depth maps simultaneously and entirely in simulation. We experimentally evaluate VSF on multi-step fabric smoothing and folding tasks against 5 baseline methods, both in simulation and on the da Vinci Research Kit (dVRK) surgical robot, without any demonstrations at train or test time. Furthermore, we find that leveraging depth significantly improves performance: RGBD data yields an 80% improvement in fabric folding success rate over pure RGB data. Code, data, videos, and supplementary material are available at https://sites.google.com/view/fabric-vsf/.
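To make the idea concrete, the following is a minimal sketch of how goal-conditioned, visual-foresight-style planning over RGBD observations typically works: a learned dynamics model predicts future observations for candidate action sequences, and a sampling-based planner (here the cross-entropy method) selects the sequence whose predicted outcome is closest to a goal image. The function name `predict_rgbd`, the action dimensionality, the horizon, and the pixel-wise L2 cost are illustrative assumptions, not the paper's exact implementation.

```python
# Sketch of goal-conditioned planning with a learned RGBD dynamics model.
# `predict_rgbd`, ACTION_DIM, HORIZON, and the cost function are assumptions
# made for illustration; they do not reproduce the paper's implementation.
import numpy as np

ACTION_DIM = 4     # e.g., pick point (x, y) and place delta (dx, dy) -- assumed
HORIZON = 5        # number of fabric actions planned ahead
POP_SIZE = 200     # candidate action sequences sampled per CEM iteration
ELITE_FRAC = 0.1
CEM_ITERS = 3


def predict_rgbd(obs, action):
    """Placeholder for the learned RGBD dynamics model (assumption)."""
    return obs  # a trained model would return the predicted next observation


def cost(pred_obs, goal_obs):
    """Pixel-wise L2 distance between the predicted and goal RGBD images."""
    return np.linalg.norm(pred_obs - goal_obs)


def plan_actions(current_obs, goal_obs):
    """Cross-entropy method over action sequences, scored by rolling out the
    learned dynamics model and comparing the final prediction to the goal."""
    mean = np.zeros((HORIZON, ACTION_DIM))
    std = np.ones((HORIZON, ACTION_DIM))
    n_elite = max(1, int(POP_SIZE * ELITE_FRAC))

    for _ in range(CEM_ITERS):
        samples = mean + std * np.random.randn(POP_SIZE, HORIZON, ACTION_DIM)
        costs = []
        for seq in samples:
            obs = current_obs
            for action in seq:            # roll out predictions only; no real steps
                obs = predict_rgbd(obs, action)
            costs.append(cost(obs, goal_obs))
        elite = samples[np.argsort(costs)[:n_elite]]
        mean, std = elite.mean(axis=0), elite.std(axis=0) + 1e-6

    return mean  # execute the first planned action, then replan (MPC-style)
```

In this style of planner, only the first action of the returned sequence is executed on the robot before replanning from the new observation, which is what allows a single learned dynamics model to serve multiple tasks simply by swapping the goal image.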

Live Paper Discussion Information

Start Time: 07/14 15:00 UTC
End Time: 07/14 17:00 UTC

Paper Reviews

Review 2

The paper is clearly written and provides many details. The experimental validation is mostly convincing. Since learning in simulation is claimed as a contribution, I was expecting to see an ablation study for domain randomization (DR): what is the effect of DR on results in simulation and on the real robot? As this is mostly an experimental paper, it would be valuable if the authors published the code and the experimental setup needed to reproduce their results.