FlowBot3D: Learning 3D Articulation Flow to Manipulate Articulated Objects


Benjamin Eisner,
Harry Zhang,
David Held (Carnegie Mellon University)
Paper #018
Session 3. Long talks


Abstract

We explore a novel method for perceiving and manipulating 3D articulated objects that generalizes to unseen object classes, enabling a robot to articulate them. We propose a vision-based system that learns to predict the potential motions of the parts of a variety of articulated objects, guiding downstream motion planning to articulate the objects. To predict these motions, we train a neural network to output a dense vector field representing the motion direction of each point in the point cloud under articulation. The system then deploys an analytical motion-planning policy based on this vector field to achieve maximal articulation. We train the vision system entirely in simulation, then demonstrate that it generalizes to unseen object instances and novel categories in both simulation and the real world, deploying our policy on a Sawyer robot with no retraining. Results suggest that our system achieves state-of-the-art performance in both simulated and real-world experiments. Code, data, and supplementary materials are available at https://sites.google.com/view/articulated-flowbot-3d/home.
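To make the pipeline concrete, here is a minimal sketch of the idea, not the authors' implementation: a network maps an N×3 point cloud to an N×3 per-point flow field, and a greedy analytical policy grasps the point with the largest predicted flow magnitude and commands a small motion along its flow direction. The `FlowPredictor` MLP below is a hypothetical placeholder standing in for the paper's dense point cloud architecture, and `greedy_flow_policy` and its `step_size` parameter are illustrative names, not from the paper.

```python
import torch
import torch.nn as nn


class FlowPredictor(nn.Module):
    """Toy stand-in for a dense-flow network. The real system uses a
    point cloud architecture; this per-point MLP only illustrates the
    input/output shapes: (N, 3) points -> (N, 3) flow vectors."""

    def __init__(self, hidden: int = 128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3),  # one 3D flow vector per input point
        )

    def forward(self, points: torch.Tensor) -> torch.Tensor:
        return self.mlp(points)


def greedy_flow_policy(points: torch.Tensor, model: FlowPredictor,
                       step_size: float = 0.01):
    """Pick the point with the largest predicted flow magnitude and
    return that grasp point plus a small motion step along its flow."""
    with torch.no_grad():
        flow = model(points)                  # (N, 3) predicted flow
    magnitudes = flow.norm(dim=1)             # (N,) flow magnitudes
    idx = int(magnitudes.argmax())            # point that moves most
    direction = flow[idx] / (magnitudes[idx] + 1e-8)
    return points[idx], step_size * direction


# Usage on a random point cloud (stands in for a depth-camera scan):
points = torch.rand(2048, 3)
grasp_point, motion = greedy_flow_policy(points, FlowPredictor())
```

In practice the grasp point and motion step would be handed to a motion planner and re-estimated after each step as the object's configuration changes; the sketch omits that closed-loop replanning.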
